Research Reports of the Max Planck Institute for Informatics

  • MPI for Informatics, Max Planck Society (ed), "Eleventh Biennial Report: May 2011 – March 2013", in Biennial Report / Max-Planck-Institut für Informatik, (2013), Vol. 11, 750 p.
  • Maximilian Dylla, Iris Miliaraki, and Martin Theobald, "Top-k Query Processing in Probabilistic Databases with Non-materialized Views", in Research Report, (2012).  
  • MPI for Informatics, Max Planck Society (ed), "Tenth Biennial Report: May 2009 – April 2011", in Biennial Report / Max-Planck-Institut für Informatik, (2011), Vol. 10, 680 p.
  • Avishek Anand, Srikanta Bedathur, Klaus Berberich, and Ralf Schenkel, "Temporal Index Sharding for Space-time Efficiency in Archive Search", in Research Report, (2011).  
    Abstract
    Time-travel queries that couple temporal constraints with keyword queries are useful in searching large-scale archives of time-evolving content such as the Web, document collections, wikis, and so on. Typical approaches to evaluating these queries efficiently slice either the entire collection or individual index lists along the time axis. Neither method is satisfactory, since both sacrifice index compactness for processing efficiency, making the index either too big or too slow. We present a novel index organization scheme that shards the index with zero increase in index size while still minimizing the cost of reading index entries during query processing. Based on the optimal sharding thus obtained, we develop a practically efficient sharding that takes into account the different costs of random and sequential accesses. Our algorithm carefully merges shards from the optimal solution, allowing a few extra sequential accesses in exchange for a significant reduction in random accesses. Finally, we empirically establish the effectiveness of our novel sharding scheme via detailed experiments over the edit history of the English version of Wikipedia between 2001 and 2005 (approx. 700 GB) and an archive of the UK governmental web sites (approx. 400 GB). Our results demonstrate the feasibility of faster time-travel query processing with no space overhead.
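    The trade-off the report exploits can be pictured with a toy cost model (a minimal sketch with invented cost constants and shard layout, not the report's actual algorithm): merging two adjacent shards saves a random access at the price of scanning entries the query does not need.

        # Toy model of the random-vs-sequential trade-off behind shard merging;
        # all constants and sizes are invented for illustration.
        C_RANDOM, C_SEQ = 100, 1   # one seek vs. one sequentially read entry

        def access_cost(shard_sizes):
            """Cost of reading every shard with its own random access."""
            return sum(C_RANDOM + C_SEQ * s for s in shard_sizes)

        def merge_adjacent(shard_sizes, wasted):
            """Merge a shard into its left neighbor whenever the saved seek
            outweighs the extra entries (wasted[i]) the merged shard forces
            the query to scan."""
            merged = [shard_sizes[0]]
            for size, extra in zip(shard_sizes[1:], wasted):
                if C_RANDOM > C_SEQ * extra:
                    merged[-1] += size + extra
                else:
                    merged.append(size)
            return merged

        print(access_cost([40, 60, 500]))                             # 900
        print(access_cost(merge_adjacent([40, 60, 500], [10, 450])))  # 810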
  • Alexander Berner, Oliver Burghard, Michael Wand, Niloy Mitra, Reinhard Klein, and Hans-Peter Seidel, "A Morphable Part Model for Shape Manipulation", in Research Report, (2011), 33 p.
    Abstract
    We introduce morphable part models for smart shape manipulation using an assembly of deformable parts with appropriate boundary conditions. In an analysis phase, we characterize the continuous allowable variations both for the individual parts and their interconnections using Gaussian shape models with low rank covariance. The discrete aspect of how parts can be assembled is captured using a shape grammar. The parts and their interconnection rules are learned semi-automatically from symmetries within a single object or from semantically corresponding parts across a larger set of example models. The learned discrete and continuous structure is encoded as a graph. In the interaction phase, we obtain an interactive yet intuitive shape deformation framework producing realistic deformations on classes of objects that are difficult to edit using existing structure-aware deformation techniques. Unlike previous techniques, our method uses self-similarities from a single model as training input and allows the user to reassemble the identified parts in new configurations, thus exploiting both the discrete and continuous learned variations while ensuring appropriate boundary conditions across part boundaries.
  • Werner Damm, Stefan Disch, Willem Hagemann, Christoph Scholl, Uwe Waldmann, and Boris Wirtz, "Integrating Incremental Flow Pipes into a Symbolic Model Checker for Hybrid Systems", in AVACS Technical Reports, (2011), Vol. 76.  
    Abstract
    We describe an approach to integrate incremental flow pipe computation into a fully symbolic backward model checker for hybrid systems. Our method combines the advantages of symbolic state set representation, such as the ability to deal with large numbers of Boolean variables, with an efficient way to handle continuous flows defined by linear differential equations, possibly including bounded disturbances.
  • Werner Damm, Carsten Ihlemann, and Viorica Sofronie-Stokkermans, "PTIME Parametric Verification of Safety Properties for Reasonable Linear Hybrid Automata", in AVACS Technical Report, (2011), 31 p.
    Abstract
    This paper identifies an industrially relevant class of linear hybrid automata (LHA), called reasonable LHA, for which convex safety properties with exhaustive entry states can be verified parametrically in polynomial time, and for which time-bounded reachability can be decided in nondeterministic polynomial time for non-parametric verification and in exponential time for parametric verification. Properties with exhaustive entry states are restricted to runs originating in a (specified) inner envelope of some mode invariant. Whether an LHA is reasonable is shown to be decidable in polynomial time.
  • Tianxiang Lu, Stephan Merz, and Christoph Weidenbach, "Towards Verification of the Pastry Protocol using TLA+", in Research Report, (2011), 51 p.
    Abstract
    Pastry is an algorithm that provides a scalable distributed hash table over an underlying P2P network. Several implementations of Pastry are available and have been applied in practice, but no attempt has so far been made to formally describe the algorithm or to verify its properties. Since Pastry combines rather complex data structures, asynchronous communication, concurrency, resilience to churn and fault tolerance, it makes an interesting target for verification. We have modeled Pastry's core routing algorithms and communication protocol in the specification language TLA+. In order to validate the model and to search for bugs, we employed the TLA+ model checker TLC to analyze several qualitative properties. We obtained non-trivial insights into the behavior of Pastry through the model checking analysis. Furthermore, we started to verify Pastry using the very same model and the interactive theorem prover TLAPS for TLA+. A first result is the reduction of global Pastry correctness properties to invariants of the underlying data structures.
  • Bilyana Taneva, M. Kacimi El Hassani, and Gerhard Weikum, "Finding Images of Rare and Ambiguous Entities", in Research Report, (2011), 30 p.
  • James Tompkin, Kwang In Kim, Jan Kautz, and Christian Theobalt, "Videoscapes: Exploring Unstructured Video Collections", in Research Report, (2011), 32 p.
  • Ernst Althaus, Sebastian Altmeyer, and Rouven Naujoks, "A New Combinatorial Approach to Parametric Path Analysis", in AVACS Technical Report, (2010), 33 p.
    Abstract
    Hard real-time systems require tasks to finish in time. To guarantee the timeliness of such a system, static timing analyses derive upper bounds on the worst-case execution time of tasks. There are two types of timing analyses: numeric and parametric ones. A numeric analysis derives a numeric timing bound and, to this end, assumes all information such as loop bounds to be given a priori. If these bounds are unknown at analysis time, a parametric analysis can compute a timing formula parametric in these variables. A performance bottleneck of timing analyses, numeric and especially parametric, can be the so-called path analysis, which determines the path in the analyzed task with the longest execution time bound. In this paper, we present a new approach to path analysis that exploits the rather regular structure of software for hard real-time and safety-critical systems. As our evaluation shows, we strongly improve upon former techniques in terms of precision and runtime in the parametric case. Even in the numeric case, our approach is on par with state-of-the-art techniques and may be an alternative to commercial tools employed for path analysis.
  • Avishek Anand, Srikanta Bedathur, Klaus Berberich, and Ralf Schenkel, "Efficient Temporal Keyword Queries over Versioned Text", in Research Report, (2010), 39 p.
  • Eric Berberich, Michael Hemmer, and Michael Kerber, "A Generic Algebraic Kernel for Non-linear Geometric Applications", in Rapport de recherche / INRIA, (2010), 20 p.
  • Klaus Berberich, Srikanta Bedathur, Omar Alonso, and Gerhard Weikum, "A Language Modeling Approach for Temporal Information Needs", in Research Report, (2010), 29 p.
    Abstract
    This work addresses information needs that have a temporal dimension conveyed by a temporal expression in the user's query. Temporal expressions such as "in the 1990s" are frequent, easily extractable, but not leveraged by existing retrieval models. One challenge when dealing with them is their inherent uncertainty: it is often unclear which exact time interval a temporal expression refers to. We integrate temporal expressions into a language modeling approach, thus making them first-class citizens of the retrieval model and considering their inherent uncertainty. Experiments on the New York Times Annotated Corpus, using Amazon Mechanical Turk to collect queries and obtain relevance assessments, demonstrate that our approach yields substantial improvements in retrieval effectiveness.
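    The uncertainty handling can be pictured with a simplified uniform model (a sketch only; the report's generative model is more refined): a vague expression is treated as a set of equally likely time intervals, and the probability of generating a document's interval is its share among them.

        # Simplified uniform model of temporal-expression uncertainty.
        def intervals_within(begin, end, step):
            """All sub-intervals [b, e] of [begin, end] at a given granularity
            that a vague expression such as "in the 1990s" could refer to."""
            points = list(range(begin, end + 1, step))
            return [(b, e) for b in points for e in points if b <= e]

        def p_generate(query_interval, doc_interval):
            """Probability that the query expression generates the document's
            interval, assuming all possible referents are equally likely."""
            candidates = intervals_within(*query_interval)
            return candidates.count(doc_interval) / len(candidates)

        # "in the 1990s" at year granularity, generating "1992-1995":
        print(p_generate((1990, 1999, 1), (1992, 1995)))   # 1/55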
  • Andreas Broschart and Ralf Schenkel, "Real-time Text Queries with Tunable Term Pair Indexes", in Research Report, (2010), 41 p.
    Abstract
    Term proximity scoring is an established means in information retrieval for improving result quality of full-text queries. Integrating such proximity scores into efficient query processing, however, has not been equally well studied. Existing methods make use of precomputed lists of documents where tuples of terms, usually pairs, occur together, usually incurring a huge index size compared to term-only indexes. This paper introduces a joint framework for trading off index size and result quality, and provides optimization techniques for tuning precomputed indexes towards either maximal result quality or maximal query processing performance, given an upper bound for the index size. The framework allows lists for pairs to be materialized selectively, based on a query log, to further reduce index size. Extensive experiments with two large text collections demonstrate runtime improvements of several orders of magnitude over existing text-based processing techniques with reasonable index sizes.
  • Anish Das Sarma, Martin Theobald, and Jennifer Widom, "LIVE: A Lineage-Supported Versioned DBMS", in Technical Report, (2010), 13 p.
  • Shady Elbassuoni, Maya Ramanath, and Gerhard Weikum, "Query Relaxation for Entity-relationship Search", in Report, (2010).  
  • Johannes Faber, Carsten Ihlemann, Swen Jacobs, and Viorica Sofronie-Stokkermans, "Automatic Verification of Parametric Specifications with Complex Topologies", in Technical Report / Sonderforschungsbereich/Transregio 14 AVACS (Automatic Verification and Analysis of Complex Systems), (2010), 40 p.
    Abstract
    The focus of this paper is on reducing the complexity in verification by exploiting modularity at various levels: in specification, in verification, and structurally. For specifications, we use the modular language CSP-OZ-DC, which allows us to decouple verification tasks concerning data from those concerning durations. At the verification level, we exploit modularity in theorem proving for rich data structures and use this for invariant checking. At the structural level, we analyze possibilities for modular verification of systems consisting of various components which interact. We illustrate these ideas by automatically verifying safety properties of a case study from the European Train Control System standard, which extends previous examples by comprising a complex track topology with lists of track segments and trains with different routes.
  • Johannes Hoffart, Fabian M. Suchanek, Klaus Berberich, and Gerhard Weikum, "YAGO2: A Spatially and Temporally Enhanced Knowledge Base from Wikipedia", in Research Report, (2010), 55 p.
    Abstract
    We present YAGO2, an extension of the YAGO knowledge base, in which entities, facts, and events are anchored in both time and space. YAGO2 is built automatically from Wikipedia, GeoNames, and WordNet. It contains 80 million facts about 9.8 million entities. Human evaluation confirmed an accuracy of 95% of the facts in YAGO2. In this paper, we present the extraction methodology, the integration of the spatio-temporal dimension, and our knowledge representation SPOTL, an extension of the original SPO-triple model to time and space.
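    A SPOTL tuple is simply the classical SPO triple extended by a time and a location component; a minimal sketch (field names are illustrative, not YAGO2's actual schema):

        from dataclasses import dataclass
        from typing import Optional, Tuple

        @dataclass(frozen=True)
        class SpotlFact:
            subject: str
            predicate: str
            obj: str
            time: Optional[Tuple[int, int]] = None   # (begin_year, end_year)
            location: Optional[str] = None           # e.g., a GeoNames entity

        fact = SpotlFact("Albert_Einstein", "wasBornIn", "Ulm",
                         time=(1879, 1879), location="Ulm")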
  • Chien-Chung Huang and Telikepalli Kavitha, "Maximum Cardinality Popular Matchings in Strict Two-sided Preference Lists", in Research Report, (2010), 17 p.
    Abstract
    We consider the problem of computing a maximum cardinality popular matching in a bipartite graph $G = (A \cup B, E)$ where each vertex $u \in A \cup B$ ranks its neighbors in a strict order of preference. This is the same as an instance of the stable marriage problem with incomplete lists. A matching $M^*$ is said to be popular if there is no matching $M$ such that more vertices are better off in $M$ than in $M^*$. Popular matchings have been extensively studied in the case of one-sided preference lists, i.e., only vertices of $A$ have preferences over their neighbors while vertices in $B$ have no preferences; polynomial time algorithms have been shown here to determine if a given instance admits a popular matching or not and, if so, to compute one with maximum cardinality. It has very recently been shown that for two-sided preference lists, the problem of determining if a given instance admits a popular matching or not is NP-complete. However, this hardness result assumes that preference lists have ties. When preference lists are strict, it is easy to show that popular matchings always exist, since stable matchings always exist and they are popular. But the complexity of computing a maximum cardinality popular matching was unknown. In this paper we show an $O(mn)$ algorithm for this problem, where $n = |A| + |B|$ and $m = |E|$.
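    The popularity criterion is a head-to-head vote between matchings; a minimal sketch of the vote count under strict lists (the dict-based data layout is invented for illustration, and the paper's $O(mn)$ algorithm is far more involved):

        def prefers(prefs, v, new, old):
            """True if v strictly prefers partner `new` over `old`; being
            matched always beats being unmatched (None)."""
            if new is None: return False
            if old is None: return True
            return prefs[v].index(new) < prefs[v].index(old)

        def more_popular(prefs, M, M_star):
            """True if more vertices are better off in M than in M_star."""
            pro = sum(prefers(prefs, v, M.get(v), M_star.get(v)) for v in prefs)
            con = sum(prefers(prefs, v, M_star.get(v), M.get(v)) for v in prefs)
            return pro > con

        # M_star is popular iff no matching M makes more_popular(...) true.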
  • Carsten Ihlemann and Viorica Sofronie-Stokkermans, "System Description: H-PILoT (Version 1.9)", in Technical Report / Sonderforschungsbereich/Transregio 14 AVACS (Automatic Verification and Analysis of Complex Systems), (2010), 45 p.
    Abstract
    This system description provides an overview of H-PILoT (Hierarchical Proving by Instantiation in Local Theory extensions), a program for hierarchical reasoning in extensions of logical theories. H-PILoT reduces deduction problems in the theory extension to deduction problems in the base theory. Specialized provers and standard SMT solvers can be used for testing the satisfiability of the formulae obtained after the reduction. For a certain type of theory extension (namely for local theory extensions) this hierarchical reduction is sound and complete and, if the formulae obtained this way belong to a fragment decidable in the base theory, H-PILoT provides a decision procedure for testing satisfiability of ground formulae, and can also be used for model generation.
  • Carsten Ihlemann and Viorica Sofronie-Stokkermans, "On Hierarchical Reasoning in Combinations of Theories", in AVACS Technical Report, (2010), 26 p.
    Abstract
    In this paper we study theory combinations over non-disjoint signatures in which hierarchical and modular reasoning is possible. We use a notion of locality of a theory extension parameterized by a closure operator on ground terms. We give criteria for recognizing these types of theory extensions. We then show that combinations of extensions of theories which are local in this extended sense also have a locality property and hence allow modular and hierarchical reasoning. We thus obtain parameterized decidability and complexity results for many (combinations of) theories important in verification.
  • Nicoleta Preda, Fabian Suchanek, Wenjun Yuan, and Gerhard Weikum, "Query Evaluation with Asymmetric Web Services", in Research Report, (2010), 31 p.
  • Stephan Seufert, Srikanta Bedathur, Julian Mestre, and Gerhard Weikum, "Bonsai: Growing Interesting Small Trees", in Research Report, (2010), 32 p.
    Abstract
    Graphs are increasingly used to model a variety of loosely structured data such as biological or social networks and entity-relationships. Given this profusion of large-scale graph data, efficiently discovering interesting substructures buried within is essential. These substructures are typically used in determining subsequent actions, such as conducting visual analytics by humans or designing expensive biomedical experiments. In such settings, it is often desirable to constrain the size of the discovered results in order to directly control the associated costs. In this report, we address the problem of finding cardinality-constrained connected subtrees from large node-weighted graphs that maximize the sum of weights of selected nodes. We provide an efficient constant-factor approximation algorithm for this strongly NP-hard problem. Our techniques can be applied in a wide variety of application settings, for example in differential analysis of graphs, a problem that frequently arises in bioinformatics but also has applications on the web.
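    For intuition, a greedy baseline for the problem is easy to state (a sketch only; the report's constant-factor approximation algorithm is more involved): grow a tree from a seed node, always attaching the heaviest node adjacent to the current tree until the cardinality budget is exhausted.

        import heapq

        def greedy_subtree(adj, weight, seed, k):
            """adj: node -> iterable of neighbors. Returns up to k nodes that
            form a connected subtree containing `seed`."""
            tree = {seed}
            frontier = [(-weight[n], n) for n in adj[seed]]
            heapq.heapify(frontier)
            while frontier and len(tree) < k:
                _, node = heapq.heappop(frontier)
                if node in tree:
                    continue
                tree.add(node)
                for nb in adj[node]:
                    if nb not in tree:
                        heapq.heappush(frontier, (-weight[nb], nb))
            return tree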
  • Martin Suda, Christoph Weidenbach, and Patrick Wischnewski, "On the saturation of YAGO", in Research Report, (2010), 50 p.
  • Art Tevs, Michael Wand, Ivo Ihrke, and Hans-Peter Seidel, "A Bayesian Approach to Manifold Topology Reconstruction", in Research Report, (2010), 23 p.
    Abstract
    In this paper, we investigate the problem of statistical reconstruction of piecewise linear manifold topology. Given a noisy, probably undersampled point cloud from a one- or two-manifold, the algorithm reconstructs an approximated most likely mesh in a Bayesian sense from which the sample might have been taken. We incorporate statistical priors on the object geometry to improve the reconstruction quality if additional knowledge about the class of original shapes is available. The priors can be formulated analytically or learned from example geometry with known manifold tessellation. The statistical objective function is approximated by a linear programming / integer programming problem, for which a globally optimal solution is found. We apply the algorithm to a set of 2D and 3D reconstruction examples, demonstrating that a statistics-based manifold reconstruction is feasible, and still yields plausible results in situations where sampling conditions are violated.
  • Martin Theobald, Mauro Sozio, Fabian Suchanek, and Ndapandula Nakashole, "URDF: Efficient Reasoning in Uncertain RDF Knowledge Bases with Soft and Hard Rules", in Research Report, (2010), 48 p.
    Abstract
    We present URDF, an efficient reasoning framework for graph-based, nonschematic RDF knowledge bases and SPARQL-like queries. URDF augments first-order reasoning by a combination of soft rules, with Datalog-style recursive implications, and hard rules, in the shape of mutually exclusive sets of facts. It incorporates the common possible worlds semantics with independent base facts as it is prevalent in most probabilistic database approaches, but also supports semantically more expressive, probabilistic first-order representations such as Markov Logic Networks. As knowledge extraction on the Web often is an iterative (and inherently noisy) process, URDF explicitly targets the resolution of inconsistencies between the underlying RDF base facts and the inference rules. The core of our approach is a novel and efficient approximation algorithm for a generalized version of the Weighted MAX-SAT problem, allowing us to dynamically resolve such inconsistencies directly at query processing time. Our MAX-SAT algorithm has a worst-case running time of O(|C| · |S|), where |C| and |S| denote the number of facts in grounded soft and hard rules, respectively, and it comes with tight approximation guarantees with respect to the shape of the rules and the distribution of confidences of facts they contain. Experiments over various benchmark settings confirm a high robustness and significantly improved runtime of our reasoning framework in comparison to state-of-the-art techniques for MCMC sampling such as MAP inference and MC-SAT.
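    A generic greedy scheme conveys the flavor of weighted MAX-SAT (a sketch only, not URDF's algorithm, which additionally handles hard rules and carries the approximation guarantees stated above): fix the variables one by one, each time choosing the value that satisfies more not-yet-satisfied clause weight.

        def greedy_maxsat(clauses, variables):
            """clauses: list of (weight, {var: polarity}); a clause is
            satisfied once some variable in it is assigned its polarity."""
            assignment, satisfied = {}, [False] * len(clauses)
            for var in variables:
                gain = {True: 0.0, False: 0.0}
                for i, (w, lits) in enumerate(clauses):
                    if not satisfied[i] and var in lits:
                        gain[lits[var]] += w
                assignment[var] = gain[True] >= gain[False]
                for i, (_, lits) in enumerate(clauses):
                    if lits.get(var) == assignment[var]:
                        satisfied[i] = True
            return assignment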
  • MPI for Informatics, Max Planck Society (ed), "Ninth Biennial Report: April 2007 – April 2009", in Biennial Report / Max-Planck-Institut für Informatik, (2009), Vol. 9, 676 p.
  • Srikanta Bedathur, Klaus Berberich, Jens Dittrich, Nikos Mamoulis, and Gerhard Weikum, "Scalable Phrase Mining for Ad-hoc Text Analytics", in Research Report, (2009), 41 p.
  • Alexander Berner, Martin Bokeloh, Michael Wand, Andreas Schilling, and Hans-Peter Seidel, "Generalized intrinsic symmetry detection", in Research Report / Max-Planck-Institut für Informatik, (2009), 33 p.
    Abstract
    In this paper, we address the problem of detecting partial symmetries in 3D objects. In contrast to previous work, our algorithm is able to match deformed symmetric parts: We first develop an algorithm for the case of approximately isometric deformations, based on matching graphs of surface feature lines that are annotated with intrinsic geometric properties. The sensitivity to non-isometry is controlled by tolerance parameters for each such annotation. Using large tolerance values for some of these annotations and a robust matching of the graph topology yields a more general symmetry detection algorithm that can detect similarities in structures that have undergone strong deformations. This approach for the first time allows for detecting partial intrinsic as well as more general, non-isometric symmetries. We evaluate the recognition performance of our technique for a number of synthetic and real-world scanner data sets.
  • Gerard de Melo and Gerhard Weikum, "Towards a Universal Wordnet by Learning from Combined Evidence", in Research Report, (2009), 32 p.
    Abstract
    Lexical databases are invaluable sources of knowledge about words and their meanings, with numerous applications in areas like NLP, IR, and AI. We propose a methodology for the automatic construction of a large-scale multilingual lexical database where words of many languages are hierarchically organized in terms of their meanings and their semantic relations to other words. This resource is bootstrapped from WordNet, a well-known English-language resource. Our approach extends WordNet with around 1.5 million meaning links for 800,000 words in over 200 languages, drawing on evidence extracted from a variety of resources including existing (monolingual) wordnets, (mostly bilingual) translation dictionaries, and parallel corpora. Graph-based scoring functions and statistical learning techniques are used to iteratively integrate this information and build an output graph. Experiments show that this wordnet has a high level of precision and coverage, and that it can be useful in applied tasks such as cross-lingual text classification.
  • Martin Fuchs, Tongbo Chen, Oliver Wang, Ramesh Raskar, Hendrik P. A. Lensch, and Hans-Peter Seidel, "A shaped temporal filter camera", in Research Report / Max-Planck-Institut für Informatik, (2009), 25 p.
  • Vlastimil Havran, Jozef Zajac, Jiri Drahokoupil, and Hans-Peter Seidel, "MPI Informatics building model as data for your research", in Research Report / Max-Planck-Institut für Informatik, (2009), 113 p.
    Abstract
    In this report we describe the MPI Informatics building model, which provides the data of the Max-Planck-Institut für Informatik (MPII) building. We present our motivation for this work and its relationship to the reproducibility of scientific research. We describe the dataset acquisition and creation, including geometry, luminaires, surface reflectances, reference photographs etc., needed to use this model in the testing of algorithms. The created dataset can be used in computer graphics and beyond, in particular in global illumination algorithms with a focus on realistic and predictive image synthesis. Outside of computer graphics, it can be used as a general source of real-world geometry with an existing counterpart, and is hence also suitable for computer vision.
  • Matthias Horbach and Christoph Weidenbach, "Decidability Results for Saturation-based Model Building", in Research Report, (2009), 38 p.
    Abstract
    Saturation-based calculi such as superposition can be successfully instantiated to decision procedures for many decidable fragments of first-order logic. In case of termination without generating an empty clause, a saturated clause set implicitly represents a minimal model for all clauses, based on the underlying term ordering of the superposition calculus. In general, it is not decidable whether a ground atom, a clause or even a formula holds in this minimal model of a satisfiable saturated clause set. Based on an extension of our superposition calculus for fixed domains with syntactic disequality constraints in a non-equational setting, we describe models given by ARM (Atomic Representations of term Models) or DIG (Disjunctions of Implicit Generalizations) representations as minimal models of finite saturated clause sets. This allows us to present several new decidability results for validity in such models. These results extend in particular the known decidability results for ARM and DIG representations.
  • Matthias Horbach and Christoph Weidenbach, "Superposition for Fixed Domains", in Research Report, (2009), 49 p.
    Abstract
    Superposition is an established decision procedure for a variety of first-order logic theories represented by sets of clauses. A satisfiable theory, saturated by superposition, implicitly defines a minimal term-generated model for the theory. Proving universal properties with respect to a saturated theory directly leads to a modification of the minimal model's term-generated domain, as new Skolem functions are introduced. For many applications, this is not desired. Therefore, we propose the first superposition calculus that can explicitly represent existentially quantified variables and can thus compute with respect to a given domain. This calculus is sound and refutationally complete in the limit for a first-order fixed domain semantics. For saturated Horn theories and classes of positive formulas, we can even employ the calculus to prove properties of the minimal model itself, going beyond the scope of known superposition-based approaches.
  • Matthias Horbach and Christoph Weidenbach, "Deciding the Inductive Validity of Forall Exists* Queries", (2009).  
    Abstract
    We present a new saturation-based decidability result for inductive validity. Let $\Sigma$ be a finite signature in which all function symbols are at most unary and let $N$ be a satisfiable Horn clause set without equality in which all positive literals are linear. If $N\cup\{A_1,\ldots,A_n\rightarrow\}$ belongs to a finitely saturating clause class, then it is decidable whether a sentence of the form $\forall\exists^* (A_1\wedge\ldots\wedge A_n)$ is valid in the minimal model of $N$.
  • Matthias B. Hullin, Boris Ajdin, Johannes Hanika, Hans-Peter Seidel, Jan Kautz, and Hendrik P. A. Lensch, "Acquisition and analysis of bispectral bidirectional reflectance distribution functions", in Research Report / Max-Planck-Institut für Informatik, (2009), 25 p.
    Abstract
    In fluorescent materials, energy from a certain band of incident wavelengths is reflected or reradiated at larger wavelengths, i.e. with lower energy per photon. While fluorescent materials are common in everyday life, they have received little attention in computer graphics. Especially, no bidirectional reflectance measurements of fluorescent materials have been available so far. In this paper, we develop the concept of a bispectral BRDF, which extends the well-known concept of the bidirectional reflectance distribution function (BRDF) to account for energy transfer between wavelengths. Using a bidirectional and bispectral measurement setup, we acquire reflectance data of a variety of fluorescent materials, including vehicle paints, paper and fabric. We show bispectral renderings of the measured data and compare them with reduced versions of the bispectral BRDF, including the traditional RGB vector valued BRDF. Principal component analysis of the measured data reveals that for some materials the fluorescent reradiation spectrum changes considerably over the range of directions. We further show that bispectral BRDFs can be efficiently acquired using an acquisition strategy based on principal components.
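    For a fixed pair of directions, a bispectral BRDF reduces to a matrix that maps the incident spectrum to the outgoing spectrum, with off-diagonal entries modeling fluorescent reradiation; a toy illustration with invented numbers (a real bispectral BRDF varies with both directions):

        import numpy as np

        wavelengths = [450, 550, 650]              # nm
        incident = np.array([1.0, 0.2, 0.1])       # incident spectral radiance
        M = np.array([[0.30, 0.00, 0.00],          # diagonal: ordinary reflection
                      [0.25, 0.40, 0.00],          # 450 nm -> 550 nm reradiation
                      [0.05, 0.10, 0.50]])         # lower-triangular: energy only
        outgoing = M @ incident                    # moves to longer wavelengths
        print(dict(zip(wavelengths, outgoing)))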
  • Gjergji Kasneci, Shady Elbassuoni, and Gerhard Weikum, "MING: Mining Informative Entity-relationship Subgraphs", in Research Report, (2009), 32 p.
  • Thomas Neumann and Gerhard Weikum, "The RDF-3X Engine for Scalable Management of RDF Data", in Research Report, (2009).  
    Abstract
    RDF is a data model for schema-free structured information that is gaining momentum in the context of Semantic-Web data, life sciences, and also Web 2.0 platforms. The "pay-as-you-go" nature of RDF and the flexible pattern-matching capabilities of its query language SPARQL entail efficiency and scalability challenges for complex queries including long join paths. This paper presents the RDF-3X engine, an implementation of SPARQL that achieves excellent performance by pursuing a RISC-style architecture with streamlined indexing and query processing. The physical design is identical for all RDF-3X databases regardless of their workloads, and completely eliminates the need for index tuning by maintaining exhaustive indexes for all permutations of subject-property-object triples and their binary and unary projections. These indexes are highly compressed, and the query processor can aggressively leverage fast merge joins that exploit processor caches. The query optimizer is able to choose optimal join orders even for complex queries, with a cost model that includes statistical synopses for entire join paths. Although RDF-3X is optimized for queries, it also provides good support for efficient online updates by means of a staging architecture: direct updates to the main database indexes are deferred and instead applied to compact differential indexes, which are later merged into the main indexes in a batched manner. Experimental studies with several large-scale datasets with more than 50 million RDF triples and benchmark queries that include pattern matching, many-way star joins, and long path joins demonstrate that RDF-3X can outperform the previously best alternatives by one or two orders of magnitude.
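    The exhaustive-permutation idea can be sketched in a few lines (in-memory sorted lists stand in for RDF-3X's compressed clustered B+-trees; an illustration, not the engine's implementation):

        from bisect import bisect_left
        from itertools import permutations

        def build_indexes(triples):
            """One sorted list per permutation of the (S, P, O) positions."""
            return {perm: sorted(tuple(t[i] for i in perm) for t in triples)
                    for perm in permutations(range(3))}

        def lookup(indexes, pattern):
            """pattern = (s, p, o) with None for unbound positions; picks an
            index whose key prefix covers exactly the bound positions."""
            bound = {i for i in range(3) if pattern[i] is not None}
            perm = next(p for p in indexes if set(p[:len(bound)]) == bound)
            index = indexes[perm]
            key = tuple(pattern[i] for i in perm[:len(bound)])
            pos, result = bisect_left(index, key), []
            while pos < len(index) and index[pos][:len(key)] == key:
                row = index[pos]
                result.append(tuple(row[perm.index(i)] for i in range(3)))
                pos += 1
            return result

        idx = build_indexes([("s1", "p1", "o1"), ("s1", "p2", "o2"),
                             ("s2", "p1", "o1")])
        print(lookup(idx, (None, "p1", "o1")))   # both subjects with p1/o1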
  • Nicoleta Preda, Fabian Suchanek, Gjergji Kasneci, Thomas Neumann, and Gerhard Weikum, "Coupling Knowledge Bases and Web Services for Active Knowledge", in Research Reports, (2009).  
    Abstract
    We present ANGIE, a system that can answer user queries by combining knowledge from a local database with knowledge retrieved from Web services. If a user poses a query that cannot be answered by the local database alone, ANGIE calls the appropriate Web services to retrieve the missing information. In ANGIE, Web services act as dynamic components of the knowledge base that deliver knowledge on demand. To the user, this is fully transparent; the dynamically acquired knowledge is presented as if it were stored in the local knowledge base. We have developed an RDF-based model for the declarative definition of functions embedded in the local knowledge base. The results of available Web services are cast into RDF subgraphs. Parameter bindings are automatically constructed by ANGIE, services are invoked, and the semi-structured information returned by the services is dynamically integrated into the knowledge base. We have developed a query rewriting algorithm that determines one or more function compositions that need to be executed in order to evaluate a SPARQL-style user query. The key idea is that the local knowledge base can be used to guide the selection of values used as input parameters of function calls. This is in contrast to the conventional approaches in the literature, which would exhaustively materialize all values that can be used as binding values for the input parameters.
  • Maya Ramanath, Kondreddi Sarath Kumar, and Georgiana Ifrim, "Generating Concise and Readable Summaries of XML documents", in Research Reports, (2009).  
  • Andrey Rybalchenko and Viorica Sofronie-Stokkermans, "Constraint Solving for Interpolation", (2009).  
  • Thomas Schultz, Joachim Weickert, and Hans-Peter Seidel, "A Higher-order Structure Tensor", in Research Report, (2009).  
    Abstract
    Structure tensors are a common tool for orientation estimation in image processing and computer vision. We present a generalization of the traditional second-order model to a higher-order structure tensor (HOST), which is able to model more than one significant orientation, as found in corners, junctions, and multi-channel images. We provide a theoretical analysis and a number of mathematical tools that facilitate practical use of the HOST, visualize it using a novel glyph for higher-order tensors, and demonstrate how it can be applied in improved integrated edge, corner, and junction detection.
  • Carsten Stoll, "Optical reconstruction of detailed animatable human body models", in Research Report / Max-Planck-Institut für Informatik, (2009), 37 p.
  • Christoph Weidenbach and Patrick Wischnewski, "Contextual Rewriting", (2009).  
  • Deepak Ajwani, Itay Malinger, Ulrich Meyer, and Sivan Toledo, "Characterizing the performance of Flash memory storage devices and its impact on algorithm design", in Research Report, (2008), 36 p.
    Abstract
    Initially used in digital audio players, digital cameras, mobile phones, and USB memory sticks, flash memory may become the dominant form of end-user storage in mobile computing, either completely replacing magnetic hard disks or serving as additional secondary storage. We study the design of algorithms and data structures that can better exploit flash memory devices. To this end, we characterize the performance of NAND flash based storage devices, including many solid state disks. We show that these devices have better random read performance than hard disks, but much worse random write performance. We also analyze the effects of misalignment, aging, and past I/O patterns on the performance obtained on these devices. We show that despite the similarities between flash memory and RAM (fast random reads) and between flash disk and hard disk (both are block-based devices), the algorithms designed in the RAM model or the external memory model do not realize the full potential of flash memory devices. We conclude with some broad guidelines for designing algorithms which can exploit the comparative advantages of both a flash memory device and a hard disk, when used together.
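    The read-performance gap is easy to probe with a micro-benchmark along the following lines (a sketch; the file path and sizes are placeholders, and the OS page cache, deliberately not bypassed here for brevity, will blur the numbers):

        import os, random, time

        PATH, BLOCK, N = "testfile.bin", 4096, 1024   # assumes a file >= N blocks

        def timed_reads(offsets):
            fd = os.open(PATH, os.O_RDONLY)
            start = time.perf_counter()
            for off in offsets:
                os.pread(fd, BLOCK, off)   # read one block at the given offset
            os.close(fd)
            return time.perf_counter() - start

        blocks = os.path.getsize(PATH) // BLOCK
        print("sequential:", timed_reads([i * BLOCK for i in range(N)]))
        print("random:    ", timed_reads(
            [b * BLOCK for b in random.sample(range(blocks), N)]))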
  • Eric Berberich, Michael Hemmer, Menelaos Karavelas, Sylvain Pion, Monique Teillaud, and Elias Tsigaridas, "Prototype Implementation of the Algebraic Kernel", (University of Groningen, Groningen, 2008).  
    Abstract
    In this report we describe the current progress with respect to prototype implementations of algebraic kernels within the ACS project. More specifically, we report on: (1) the CGAL package Algebraic_kernel_for_circles_2_2, aimed at providing the necessary algebraic functionality required for treating circular arcs; (2) an interface between CGAL and SYNAPS for accessing the algebraic functionality in the SYNAPS library; (3) the NumeriX library (part of the EXACUS project), a prototype implementation of a set of algebraic tools on univariate polynomials needed to build an algebraic kernel; and (4) a rough CGAL-like prototype implementation of a set of algebraic tools on univariate polynomials.
  • Martin Bokeloh, Alexander Berner, Michael Wand, Hans-Peter Seidel, and Andreas Schilling, "Slippage Features", in WSI, (2008), Vol. 2008-03, 17 p.
  • Gerard de Melo, Fabian Suchanek, and Adam Pease, "Integrating Yago into the suggested upper merged ontology", in Research Report / Max-Planck-Institut für Informatik, (2008), 31 p.
    Abstract
    Ontologies are becoming more and more popular as background knowledge for intelligent applications. Up to now, there has been a schism between manually assembled, highly axiomatic ontologies and large, automatically constructed knowledge bases. This report discusses how the two worlds can be brought together by combining the high-level axiomatizations from the Standard Upper Merged Ontology (SUMO) with the extensive world knowledge of the YAGO ontology. On the theoretical side, it analyses the differences between the knowledge representation in YAGO and SUMO. On the practical side, this report explains how the two resources can be merged. This yields a new large-scale formal ontology, which provides information about millions of entities such as people, cities, organizations, and companies. This report is the detailed version of our paper at ICTAI 2008.
  • Arnaud Luc Fietzke and Christoph Weidenbach, "Labelled splitting", in Research Report, (2008), 45 p.
    Abstract
    We define a superposition calculus with explicit splitting and an explicit, new backtracking rule on the basis of labelled clauses. For the first time, we show a superposition calculus with an explicit backtracking rule to be sound and complete. The new backtracking rule advances backtracking with branch condensing, known from SPASS. An experimental evaluation of an implementation of the new rule shows that it considerably improves on the previous SPASS splitting implementation. Finally, we discuss the relationship between labelled first-order splitting and DPLL-style splitting with intelligent backtracking and clause learning.
  • Gjergji Kasneci, Maya Ramanath, Mauro Sozio, Fabian Suchanek, and Gerhard Weikum, "STAR: Steiner tree approximation in relationship-graphs", in Research Report / Max-Planck-Institut für Informatik, (2008), 37 p.
    Abstract
    Large-scale graphs and networks are abundant in modern information systems: entity-relationship graphs over relational data or Web-extracted entities, biological networks, social online communities, knowledge bases, and many more. Often such data comes with expressive node and edge labels that allow an interpretation as a semantic graph, and edge weights that reflect the strengths of semantic relations between entities. Finding close relationships between a given set of two, three, or more entities is an important building block for many search, ranking, and analysis tasks. From an algorithmic point of view, this translates into computing the best Steiner trees between the given nodes, a classical NP-hard problem. In this paper, we present a new approximation algorithm, coined STAR, for relationship queries over large graphs that do not fit into memory. We prove that for n query entities, STAR yields an O(log(n))-approximation of the optimal Steiner tree, and show that in practical cases the results returned by STAR are qualitatively better than the results returned by a classical 2-approximation algorithm. We then describe an extension to our algorithm to return the top-k Steiner trees. Finally, we evaluate our algorithm over both main-memory as well as completely disk-resident graphs containing millions of nodes. Our experiments show that STAR outperforms the best state-of-the-art approaches and returns qualitatively better results.
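    The classical 2-approximation used as the baseline above is the metric-closure heuristic: connect the query terminals by a minimum spanning tree over their pairwise shortest-path distances. A minimal sketch (assuming a connected, non-negatively weighted in-memory graph; STAR itself works differently and handles disk-resident graphs):

        import heapq

        def dijkstra(adj, src):
            """adj: node -> list of (neighbor, weight); distances from src."""
            dist, pq = {src: 0.0}, [(0.0, src)]
            while pq:
                d, u = heapq.heappop(pq)
                if d > dist.get(u, float("inf")):
                    continue
                for v, w in adj[u]:
                    if d + w < dist.get(v, float("inf")):
                        dist[v] = d + w
                        heapq.heappush(pq, (d + w, v))
            return dist

        def steiner_2approx_cost(adj, terminals):
            """Weight of an MST over the terminals' metric closure; at most
            twice the weight of an optimal Steiner tree."""
            dist = {t: dijkstra(adj, t) for t in terminals}
            in_tree, cost = {terminals[0]}, 0.0
            while len(in_tree) < len(terminals):
                u, v = min(((a, b) for a in in_tree
                            for b in terminals if b not in in_tree),
                           key=lambda e: dist[e[0]][e[1]])
                cost += dist[u][v]
                in_tree.add(v)
            return cost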
  • Thomas Neumann and Guido Moerkotte, "Single phase construction of optimal DAG-structured QEPs", in Research Report / Max-Planck-Institut für Informatik, (2008), 73 p.
    Abstract
    Traditionally, database management systems use tree-structured query evaluation plans. They are easy to implement but not expressive enough for some optimizations like eliminating common algebraic subexpressions or magic sets. These require directed acyclic graphs (DAGs), i.e. shared subplans. Existing approaches consider DAGs merely for special cases and not in full generality. We introduce a novel framework to reason about sharing of subplans and, thus, DAG-structured query evaluation plans. Then, we present the first plan generator capable of generating optimal DAG-structured query evaluation plans. The experimental results show that with no or only a modest increase of plan generation time, a major reduction of query execution time can be achieved for common queries.
  • Thomas Schultz, Holger Theisel, and Hans-Peter Seidel, "Crease surfaces: from theory to extraction and application to diffusion tensor MRI", in Research Report / Max-Planck-Institut für Informatik, (2008), 33 p.
    Abstract
    Crease surfaces are two-dimensional manifolds along which a scalar field assumes a local maximum (ridge) or a local minimum (valley) in a constrained space. Unlike isosurfaces, they are able to capture extremal structures in the data. Creases have a long tradition in image processing and computer vision, and have recently become a popular tool for visualization. When extracting crease surfaces, degeneracies of the Hessian (i.e., lines along which two eigenvalues are equal), have so far been ignored. We show that these loci, however, have two important consequences for the topology of crease surfaces: First, creases are bounded not only by a side constraint on eigenvalue sign, but also by Hessian degeneracies. Second, crease surfaces are not in general orientable. We describe an efficient algorithm for the extraction of crease surfaces which takes these insights into account and demonstrate that it produces more accurate results than previous approaches. Finally, we show that DT-MRI streamsurfaces, which were previously used for the analysis of planar regions in diffusion tensor MRI data, are mathematically ill-defined. As an example application of our method, creases in a measure of planarity are presented as a viable substitute.
  • Viorica Sofronie-Stokkermans, "Efficient Hierarchical Reasoning about Functions over Numerical Domains", in AVACS Technical Report, (2008), 17 p.
    Abstract
    We show that many properties studied in mathematical analysis (monotonicity, boundedness, inverse, Lipschitz properties, possibly combined with continuity or derivability) are expressible by formulae in a class for which sound and complete hierarchical proof methods for testing satisfiability of sets of ground clauses exist. The results are useful for automated reasoning in mathematical analysis and for the verification of hybrid systems.
  • Viorica Sofronie-Stokkermans, "Sheaves and Geometric Logic and Applications to Modular Verification of Complex Systems", in AVACS Technical Report, (2008).  
    Abstract
    In this paper we show that states, transitions and behavior of concurrent systems can often be modeled as sheaves over a suitable topological space (where the topology expresses how the interacting systems share information). This allows us to use results from categorical logic (and in particular geometric logic) to describe which types of properties, if valid locally in all component systems, also hold globally for the system obtained by interconnecting the individual systems. The main area of application is the modular verification of complex systems. We illustrate the ideas by means of an example involving a family of interacting controllers for trains on a rail track.
  • Fabian Suchanek, Mauro Sozio, and Gerhard Weikum, "SOFIE: A Self-Organizing Framework for Information Extraction", in Research Report / Max-Planck-Institut für Informatik, (2008), 49 p.
    Abstract
    This paper presents SOFIE, a system for automated ontology extension. SOFIE can parse natural language documents, extract ontological facts from them and link the facts into an ontology. SOFIE uses logical reasoning on the existing knowledge and on the new knowledge in order to disambiguate words to their most probable meaning, to reason on the meaning of text patterns and to take into account world knowledge axioms. This allows SOFIE to check the plausibility of hypotheses and to avoid inconsistencies with the ontology. The framework of SOFIE unites the paradigms of pattern matching, word sense disambiguation and ontological reasoning in one unified model. Our experiments show that SOFIE delivers near-perfect output, even from unstructured Internet documents.
  • Danyi Wang, Alexander Belyaev, Waqar Saleem, and Hans-Peter Seidel, "Shape Complexity from Image Similarity", in Research Report / Max-Planck-Institut für Informatik, (2008), 28 p.
    Abstract
    We present an approach to automatically compute the complexity of a given 3D shape. Previous approaches have made use of geometric and/or topological properties of the 3D shape to compute complexity. Our approach is based on shape appearance and estimates the complexity of a given 3D shape according to how 2D views of the shape diverge from each other. We use similarity among views of the 3D shape as the basis for our complexity computation. Hence our approach uses claims from psychology that humans mentally represent 3D shapes as organizations of 2D views and, therefore, mimics how humans gauge shape complexity. Experimental results show that our approach produces results that are more in agreement with the human notion of shape complexity than those obtained using previous approaches.
  • MPI for Informatics, Max Planck Society (ed), "Eighth Biennial Report: April 2005 – March 2007", in Biennial Report / Max-Planck-Institut für Informatik, (2007), Vol. 8, pp. 535 p.
  • Ernst Althaus and Stefan Canzar, "A Lagrangian relaxation approach for the multiple sequence alignment problem", in Research Report / Max-Planck-Institut für Informatik, (2007), pp. 41 p.  
    Abstract
    We present a branch-and-bound (B&B) algorithm for the multiple sequence alignment problem (MSA), one of the most important problems in computational biology. The upper bound at each B&B node is based on a Lagrangian relaxation of an integer linear programming formulation for MSA. Dualizing certain inequalities, the Lagrangian subproblem becomes a pairwise alignment problem, which can be solved efficiently by a dynamic programming approach. By reformulating the problem with additionally introduced variables prior to relaxation, we improve the convergence rate dramatically while still being able to solve the Lagrangian subproblem efficiently. Our experiments show that our implementation, although preliminary, outperforms all exact algorithms for the multiple sequence alignment problem.
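    As background for the bounding step (a generic sketch, not the report's exact formulation): dualizing the complicating constraints $Dx \le d$ of an ILP $\max\{c^{T}x : Ax \le b,\; Dx \le d,\; x \in \{0,1\}^{n}\}$ with multipliers $\lambda \ge 0$ gives the Lagrangian function
        $L(\lambda) = \max_{Ax \le b,\, x \in \{0,1\}^{n}} \left( c^{T}x + \lambda^{T}(d - Dx) \right),$
    which satisfies $L(\lambda) \ge \mathrm{OPT}$ for every $\lambda \ge 0$ and is therefore a valid upper bound at each B&B node. The bound is typically tightened by projected subgradient steps $\lambda \leftarrow \max(0,\, \lambda - \theta\,(d - Dx^{*}))$, where $x^{*}$ is the maximizer of the subproblem, here a pairwise alignment solvable by dynamic programming.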
  • Robert Bargmann, Volker Blanz, and Hans-Peter Seidel, "A nonlinear viseme model for triphone-based speech synthesis", in Research Report / Max-Planck-Institut für Informatik, (2007), pp. 28 p.  
    Abstract
    This paper presents a representation of visemes that defines a measure of similarity between different visemes, and a system of viseme categories. The representation is derived from a statistical data analysis of feature points on 3D scans, using Locally Linear Embedding (LLE). The similarity measure determines which available visemes and triphones to use to synthesize 3D face animation for a novel audio file. From a corpus of dynamic recorded 3D mouth articulation data, our system is able to find the best-suited sequence of triphones over which to interpolate while reusing the coarticulation information to obtain correct mouth movements over time. Due to the similarity measure, the system can deal with relatively small triphone databases and find the most appropriate candidates. With the selected sequence of database triphones, we can finally morph along the successive triphones to produce the final articulation animation. In an entirely data-driven approach, our automated procedure for defining viseme categories reproduces the groups of related visemes that are defined in the phonetics literature.
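    To make the analysis step concrete, the following is a minimal sketch of Locally Linear Embedding on feature-point data using scikit-learn; the random stand-in data, the neighbor count and the target dimension are illustrative assumptions, not the values used in the report.

        import numpy as np
        from sklearn.manifold import LocallyLinearEmbedding

        rng = np.random.default_rng(0)
        # Stand-in for per-scan mouth feature points, one flattened row per 3D scan.
        X = rng.normal(size=(200, 3 * 40))  # 200 scans, 40 feature points each

        lle = LocallyLinearEmbedding(n_neighbors=12, n_components=3)
        Y = lle.fit_transform(X)            # low-dimensional coordinates per scan

        # Euclidean distance in the embedded space is one possible stand-in
        # for a viseme similarity measure.
        print(Y.shape, np.linalg.norm(Y[0] - Y[1]))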
  • Eric Berberich, Efi Fogel, Dan Halperin, Kurt Mehlhorn, and Ron Wein, "Sweeping and maintaining two-dimensional arrangements on quadrics", in ACS Technical Reports, (2007), pp. 10 p.  
  • Eric Berberich, Efi Fogel, and Andreas Meyer, "Updated Website to include Benchmark Instances for Arrangements of Quadrics and Planar Algebraic Curves", in ACS Technical Reports, (2007), pp. 5 p.  
  • Eric Berberich and Michael Hemmer, "Definition of the 3D Quadrical Kernel Content", in ACS Technical Reports, (2007), pp. 25 p.  
  • Eric Berberich, Manuel Caroli, and Nicola Wolpert, "Exact Computation of Arrangements of Rotated Conics", in ACS Technical Reports, (2007), pp. 5 p.  
  • Eric Berberich and Lutz Kettner, "Linear-Time Reordering in a Sweep-line Algorithm for Algebraic Curves Intersecting in a Common Point", in Research Report, (2007), pp. 20 p.  
  • Eric Berberich and Michal Meyerovitch, "Computing Envelopes of Quadrics", in ACS Technical Reports, (2007), pp. 5 p.  
  • Eric Berberich, Michael Hemmer, Menelaos I. Karavelas, and Monique Teillaud, "Revision of interface specification of algebraic kernel", in ACS Technical Reports, (2007), pp. 100 p.  
  • Klaus Berberich, Srikanta Bedathur, Thomas Neumann, and Gerhard Weikum, "A Time Machine for Text Search", in Research Report, (2007), pp. 39 p.  
  • Christopher Dyken, Gernot Ziegler, Christian Theobalt, and Hans-Peter Seidel, "HistoPyramids in Iso-Surface Extraction", in Research Report / Max-Planck-Institut für Informatik, (2007), pp. 16 p.  
    Abstract
    We present an implementation approach to high-speed Marching Cubes, running entirely on the Graphics Processing Unit of Shader Model 3.0 and 4.0 graphics hardware. Our approach is based on the interpretation of Marching Cubes as a stream compaction and expansion process, and is implemented using the HistoPyramid, a hierarchical data structure previously only used in GPU data compaction. We extend the HistoPyramid structure to allow for stream expansion, which provides an efficient method for generating geometry directly on the GPU, even on Shader Model 3.0 hardware. Currently, our algorithm outperforms all other known GPU-based iso-surface extraction algorithms. We describe our implementation and present a performance analysis on several generations of graphics hardware.
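    The indexing logic behind a HistoPyramid can be illustrated on the CPU (a minimal numpy sketch under an assumed 2x2 reduction; the report's contribution is doing the reduction and traversal in shaders and extending the structure to stream expansion, which this sketch omits):

        import numpy as np

        def build_histopyramid(active):
            """Bottom-up 2x2 reduction of an NxN 0/1 image (N a power of two)."""
            levels = [active.astype(np.int64)]
            while levels[-1].shape[0] > 1:
                a = levels[-1]
                levels.append(a[0::2, 0::2] + a[0::2, 1::2] +
                              a[1::2, 0::2] + a[1::2, 1::2])
            return levels  # levels[-1][0, 0] holds the total active-cell count

        def locate(levels, i):
            """Top-down traversal: (row, col) of the i-th active cell (0-based)."""
            r = c = 0
            for a in reversed(levels[:-1]):
                r, c = 2 * r, 2 * c
                for dr, dc in ((0, 0), (0, 1), (1, 0), (1, 1)):  # fixed child order
                    n = a[r + dr, c + dc]
                    if i < n:
                        r, c = r + dr, c + dc
                        break
                    i -= n
            return r, c

        img = np.random.default_rng(1).random((8, 8)) > 0.7
        levels = build_histopyramid(img)
        total = int(levels[-1][0, 0])
        points = [locate(levels, i) for i in range(total)]  # compacted point list
        assert all(img[r, c] for r, c in points) and len(points) == total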
  • Arno Eigenwillig, Lutz Kettner, and Nicola Wolpert, "Snap Rounding of Bézier Curves", in Research Report, (2007), pp. 19 p.  
  • Jürgen Gall, Bodo Rosenhahn, and Hans-Peter Seidel, "Clustered stochastic optimization for object recognition and pose estimation", in Research Report / Max-Planck-Institut für Informatik, (2007), pp. 23 p.  
    Abstract
    We present an approach for estimating the 3D position and, in the case of articulated objects, also the joint configuration from segmented 2D images. The pose estimation without initial information is a challenging optimization problem in a high dimensional space and is essential for texture acquisition and initialization of model-based tracking algorithms. Our method is able to recognize the correct object in the case of multiple objects and estimates its pose with a high accuracy. The key component is a particle-based global optimization method that converges to the global minimum similar to simulated annealing. After detecting potential bounded subsets of the search space, the particles are divided into clusters and migrate to the most attractive cluster as the time increases. The performance of our approach is verified by means of real scenes and a quantitative error analysis for image distortions. Our experiments include rigid bodies and full human bodies.
  • Jürgen Gall, Thomas Brox, Bodo Rosenhahn, and Hans-Peter Seidel, "Global stochastic optimization for robust and accurate human motion capture", in Research Report / Max-Planck-Institut für Informatik, (2007), pp. 28 p.  
    Abstract
    Tracking of human motion in video is usually tackled either by local optimization or filtering approaches. While local optimization offers accurate estimates but often loses track due to local optima, particle filtering can recover from errors at the expense of a poor accuracy due to overestimation of noise. In this paper, we propose to embed global stochastic optimization in a tracking framework. This new optimization technique exhibits both the robustness of filtering strategies and a remarkable accuracy. We apply the optimization to an energy function that relies on silhouettes and color, as well as some prior information on physical constraints. This framework provides a general solution to markerless human motion capture since neither excessive preprocessing nor strong assumptions apart from a 3D model are required. The optimization provides initialization and accurate tracking even in case of low contrast and challenging illumination. Our experimental evaluation demonstrates the large improvements obtained with this technique. It comprises a quantitative error analysis comparing the approach with local optimization, particle filtering, and a heuristic based on particle filtering.
  • Jürgen Gall, Jürgen Potthoff, Christoph Schnörr, Bodo Rosenhahn, and Hans-Peter Seidel, "Interacting and Annealing Particle Filters: Mathematics and a Recipe for Applications", in Research Report, (2007).  
    Abstract
    Interacting and annealing are two powerful strategies that are applied in different areas of stochastic modelling and data analysis. Interacting particle systems approximate a distribution of interest by a finite number of particles where the particles interact between the time steps. In computer vision, they are commonly known as particle filters. Simulated annealing, on the other hand, is a global optimization method derived from statistical mechanics. A recent heuristic approach fusing these two techniques for motion capture has become known as the annealed particle filter. In order to analyze these techniques, we rigorously derive in this paper two algorithms with annealing properties based on the mathematical theory of interacting particle systems. Convergence results and sufficient parameter restrictions enable us to point out limitations of the annealed particle filter. Moreover, we evaluate the impact of the parameters on the performance in various experiments, including the tracking of articulated bodies from noisy measurements. Our results provide general guidance on suitable parameter choices for different applications.
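    A minimal numpy sketch of one annealing run on a toy 1D energy may help fix the idea: each layer sharpens the weighting exponent, resamples, and diffuses the particles. The schedule, noise decay and energy function are illustrative assumptions, not parameter choices endorsed by the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def energy(x):
            # Toy 1D energy: global minimum near x = 2, local minima elsewhere.
            return 0.1 * (x - 2.0) ** 2 + np.sin(3.0 * x)

        n, layers = 500, 8
        betas = np.geomspace(0.2, 20.0, layers)  # increasing annealing schedule
        x = rng.uniform(-6, 6, n)                # initial particle set

        for m, beta in enumerate(betas):
            e = energy(x)
            w = np.exp(-beta * (e - e.min()))    # annealed importance weights
            w /= w.sum()
            idx = rng.choice(n, size=n, p=w)     # resampling step
            sigma = 0.5 * 0.8 ** m               # shrinking diffusion noise
            x = x[idx] + rng.normal(0.0, sigma, n)

        print("estimate:", x[np.argmin(energy(x))])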
  • Anders Gidenstam and Marina Papatriantafilou, "LFthreads: a lock-free thread library", in Research Report / Max-Planck-Institut für Informatik, (2007), pp. 36 p.  
    Abstract
    This paper presents the synchronization in LFthreads, a thread library entirely based on lock-free methods, i.e. no spin-locks or similar synchronization mechanisms are employed in the implementation of the multithreading. Since lock-freedom is highly desirable in multiprocessors/multicores due to its advantages in parallelism, fault-tolerance, convoy-avoidance and more, there is an increased demand for lock-free methods in parallel applications, hence also in multiprocessor/multicore system services. This is why a lock-free multithreading library is important. To the best of our knowledge LFthreads is the first thread library that provides a lock-free implementation of blocking synchronization primitives for application threads. Lock-free implementation of objects with blocking semantics may sound like a contradictory goal. However, such objects have benefits: e.g. library operations that block and unblock threads on the same synchronization object can make progress in parallel while maintaining the desired thread-level semantics and without having to wait for any "slow" operations among them. Besides, as no spin-locks or similar synchronization mechanisms are employed, processors are always able to do useful work. As a consequence, applications, too, can enjoy enhanced parallelism and fault-tolerance. The synchronization in LFthreads is achieved by a new method, which we call responsibility hand-off (RHO), that does not need any special kernel support.
  • Robert Herzog, Vlastimil Havran, Shinichi Kinuwaki, Karol Myszkowski, and Hans-Peter Seidel, "Global Illumination using Photon Ray Splatting", in Research Report, (2007), pp. 66 p.  
    Abstract
    We present a novel framework for efficiently computing the indirect illumination in diffuse and moderately glossy scenes using density estimation techniques. A vast majority of existing global illumination approaches either quickly computes an approximate solution, which may not be adequate for previews, or performs a much more time-consuming computation to obtain high-quality results for the indirect illumination. Our method improves photon density estimation, which is an approximate solution, and leads to significantly better visual quality in particular for complex geometry, while only slightly increasing the computation time. We perform direct splatting of photon rays, which allows us to use simpler search data structures. Our novel lighting computation is derived from basic radiometric theory and requires only small changes to existing photon splatting approaches. Since our density estimation is carried out in ray space rather than on surfaces, as in the commonly used photon mapping algorithm, the results are more robust against geometrically incurred sources of bias. This holds also in combination with final gathering where photon mapping often overestimates the illumination near concave geometric features. In addition, we show that our splatting technique can be extended to handle moderately glossy surfaces and can be combined with traditional irradiance caching for sparse sampling and filtering in image space.
  • Thomas Hillenbrand and Christoph Weidenbach, "Superposition for Finite Domains", in Max-Planck-Institut für Informatik / Research Report, (2007), pp. 25 p.  
  • Philipp Jenke, Michael Wand, and Wolfgang Strasser, "Efficient Surface Reconstruction for Piecewise Smooth Objects", in WSI, (2007), Vol. 2007-05, pp. 17 p.  
  • Gjergji Kasneci, Fabian M. Suchanek, Georgiana Ifrim, Maya Ramanath, and Gerhard Weikum, "NAGA: Searching and Ranking Knowledge", in Research Report, (2007), pp. 42 p.  
    Abstract
    The Web has the potential to become the world's largest knowledge base. In order to unleash this potential, the wealth of information available on the web needs to be extracted and organized. There is a need for new querying techniques that are simple yet more expressive than those provided by standard keyword-based search engines. Search for knowledge rather than Web pages needs to consider inherent semantic structures like entities (person, organization, etc.) and relationships (isA, locatedIn, etc.). In this paper, we propose NAGA, a new semantic search engine. NAGA's knowledge base, which is organized as a graph with typed edges, consists of millions of entities and relationships automatically extracted from Web-based corpora. A query language capable of expressing keyword search for the casual user as well as graph queries with regular expressions for the expert enables the formulation of queries with additional semantic information. We introduce a novel scoring model, based on the principles of generative language models, which formalizes several notions like confidence, informativeness and compactness and uses them to rank query results. We demonstrate NAGA's superior result quality over current search engines by conducting a comprehensive evaluation, including user assessments, for advanced queries.
  • Torsten Langer and Hans-Peter Seidel, "Construction of smooth maps with mean value coordinates", in Research Report / Max-Planck-Institut für Informatik, (2007), pp. 22 p.  
    Abstract
    Bernstein polynomials are a classical tool in Computer Aided Design to create smooth maps with a high degree of local control. They are used for the construction of Bézier surfaces, free-form deformations, and many other applications. However, classical Bernstein polynomials are only defined for simplices and parallelepipeds. These can in general not directly capture the shape of arbitrary objects. Instead, a tessellation of the desired domain has to be done first. We construct smooth maps on arbitrary sets of polytopes such that the restriction to each of the polytopes is a Bernstein polynomial in mean value coordinates (or any other generalized barycentric coordinates). In particular, we show how smooth transitions between different domain polytopes can be ensured.
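    For reference, the classical Bernstein polynomials on a simplex, written in barycentric coordinates $\lambda_1, \dots, \lambda_m$, are
        $B^{n}_{\mathbf{k}}(\lambda_1, \dots, \lambda_m) = \binom{n}{k_1, \dots, k_m}\, \lambda_1^{k_1} \cdots \lambda_m^{k_m}, \qquad k_1 + \dots + k_m = n;$
    the construction described above substitutes mean value coordinates (or other generalized barycentric coordinates) of a domain polytope for the $\lambda_i$.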
  • Andreas Podelski and Silke Wagner, "A method and a tool for automatic verification of region stability for hybrid systems", in Research Report / Max-Planck-Institut für Informatik, (2007), pp. 46 p.
    Abstract
    We propose a model checking method and tool that integrates state abstraction techniques for the automatic proof of a stability property for hybrid systems called region stability. It is based on a new notion of snapshots, which yield characteristic discretizations of trajectories. We have implemented the tool and applied it to solve a number of verification problems, including the fully automatic stability proof for the brake-curve behavior of a train system.
  • Carsten Stoll, Edilson de Aguiar, Christian Theobalt, and Hans-Peter Seidel, "A volumetric approach to interactive shape editing", in Research Report / Max-Planck-Institut für Informatik, (2007), pp. 28 p.  
    Abstract
    We present a novel approach to real-time shape editing that produces physically plausible deformations using an efficient and easy-to-implement volumetric approach. Our algorithm alternates between a linear tetrahedral Laplacian deformation step and a differential update in which rotational transformations are approximated. By means of this iterative process we can achieve non-linear deformation results while having to solve only linear equation systems. The differential update step relies on estimating the rotational component of the deformation relative to the rest pose. This makes the method very stable as the shape can be reverted to its rest pose even after extreme deformations. Only a few point handles or area handles imposing an orientation are needed to achieve high quality deformations, which makes the approach intuitive to use. We show that our technique is well suited for interactive shape manipulation and also provides an elegant way to animate models with captured motion data.
  • Fabian Suchanek, Gjergji Kasneci, and Gerhard Weikum, "Yago: a large ontology from Wikipedia and WordNet", in Research Report / Max-Planck-Institut für Informatik, (2007), pp. 67 p.  
    Abstract
    This article presents YAGO, a large ontology with high coverage and precision. YAGO has been automatically derived from Wikipedia and WordNet. It comprises entities and relations, and currently contains more than 1.7 million entities and 15 million facts. These include the taxonomic Is-A hierarchy as well as semantic relations between entities. The facts for YAGO have been extracted from the category system and the infoboxes of Wikipedia and have been combined with taxonomic relations from WordNet. Type checking techniques help us keep YAGO's precision at 95% -- as proven by an extensive evaluation study. YAGO is based on a clean logical model with a decidable consistency. Furthermore, it allows representing n-ary relations in a natural way while maintaining compatibility with RDFS. A powerful query model facilitates access to YAGO's data.
  • Irene Albrecht, Michael Kipp, Michael Paul Neff, and Hans-Peter Seidel, "Gesture modeling and animation by imitation", in Research Report / Max-Planck-Institut für Informatik, (2006), pp. 62 p.  
    Abstract
    Animated characters that move and gesticulate appropriately with spoken text are useful in a wide range of applications. Unfortunately, they are very difficult to generate, even more so when a unique, individual movement style is required. We present a system that is capable of producing full-body gesture animation for given input text in the style of a particular performer. Our process starts with video of a performer whose gesturing style we wish to animate. A tool-assisted annotation process is first performed on the video, from which a statistical model of the person's particular gesturing style is built. Using this model and tagged input text, our generation algorithm creates a gesture script appropriate for the given text. As opposed to isolated singleton gestures, our gesture script specifies a stream of continuous gestures coordinated with speech. This script is passed to an animation system, which enhances the gesture description with more detail and prepares a refined description of the motion. An animation subengine can then generate either kinematic or physically simulated motion based on this description. The system is capable of creating animation that replicates a particular performance in the video corpus, generating new animation for the spoken text that is consistent with the given performer's style and creating performances of a given text sample in the style of different performers.
  • Ralitsa Angelova and Stefan Siersdorfer, "A neighborhood-based approach for clustering of linked document collections", in Research Report / Max-Planck-Institut für Informatik, (2006), pp. 32 p.  
    Abstract
    This technical report addresses the problem of automatically structuring linked document collections by using clustering. In contrast to traditional clustering, we study the clustering problem in the light of available link structure information for the data set (e.g., hyperlinks among web documents or co-authorship among bibliographic data entries). Our approach is based on iterative relaxation of cluster assignments, and can be built on top of any clustering algorithm (e.g., k-means or DBSCAN). These techniques result in higher cluster purity, better overall accuracy, and make self-organization more robust. Our comprehensive experiments on three different real-world corpora demonstrate the benefits of our approach.
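    A minimal sketch of such an iterative relaxation (with k-means as the base clusterer, a convex content/link weighting, and random stand-in data; all of these are illustrative assumptions rather than the report's exact algorithm):

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 5))               # stand-in document features
        adj = rng.random((60, 60)) < 0.05          # stand-in link structure
        adj = np.triu(adj, 1)
        adj = adj | adj.T                          # undirected links

        k, alpha = 3, 0.7                          # alpha: content vs. link weight
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)

        for _ in range(10):                        # iterative relaxation
            centers = np.stack([
                X[labels == c].mean(axis=0) if (labels == c).any() else X.mean(axis=0)
                for c in range(k)
            ])
            dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            content = 1.0 / (1.0 + dist)           # crude content affinity
            votes = np.stack([(adj & (labels == c)[None, :]).sum(axis=1)
                              for c in range(k)], axis=1)
            link = votes / np.maximum(votes.sum(axis=1, keepdims=True), 1)
            new = np.argmax(alpha * content + (1 - alpha) * link, axis=1)
            if (new == labels).all():
                break                              # assignments have stabilized
            labels = new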
  • Holger Bast, Ingmar Weber, and Christian Worm Mortensen, "Output-sensitive autocompletion search", in Research Report / Max-Planck-Institut für Informatik, (2006), pp. 17 p.  
    Abstract
    We consider the following autocompletion search scenario: imagine a user of a search engine typing a query; then with every keystroke display those completions of the last query word that would lead to the best hits, and also display the best such hits. The following problem is at the core of this feature: for a fixed document collection, given a set $D$ of documents, and an alphabetical range $W$ of words, compute the set of all word-in-document pairs $(w,d)$ from the collection such that $w \in W$ and $d\in D$. We present a new data structure with the help of which such autocompletion queries can be processed, on the average, in time linear in the input plus output size, independent of the size of the underlying document collection. At the same time, our data structure uses no more space than an inverted index. Actual query processing times on a large test collection correlate almost perfectly with our theoretical bound.
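    The core query is easy to state as a baseline over an ordinary inverted index (the toy index below is an illustrative assumption); the point of the report's data structure is to answer the same query output-sensitively and within inverted-index space, which this naive scan does not achieve:

        from bisect import bisect_left, bisect_right

        # Toy inverted index: word -> sorted list of document ids (illustrative).
        index = {
            "informatics": [4, 5, 9],
            "information": [1, 4, 7],
            "informative": [2, 4],
            "retrieval":   [1, 2, 9],
        }
        words = sorted(index)  # the vocabulary, in alphabetical order

        def autocomplete(prefix, D):
            """All (word, doc) pairs with the word matching `prefix` and doc in D."""
            lo = bisect_left(words, prefix)
            hi = bisect_right(words, prefix + "\uffff")  # end of the prefix range
            hits = [(w, d) for w in words[lo:hi] for d in index[w] if d in D]
            completions = sorted({w for w, _ in hits})
            return completions, hits

        print(autocomplete("informat", {4, 9}))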
  • Holger Bast, Debapriyo Majumdar, Ralf Schenkel, Martin Theobald, and Gerhard Weikum, "IO-Top-k: index-access optimized top-k query processing", in Research Report / Max-Planck-Institut für Informatik, (2006), pp. 49 p.
    Abstract
    Top-k query processing is an important building block for ranked retrieval, with applications ranging from text and data integration to distributed aggregation of network logs and sensor data. Top-k queries operate on index lists for a query's elementary conditions and aggregate scores for result candidates. One of the best implementation methods in this setting is the family of threshold algorithms, which aim to terminate the index scans as early as possible based on lower and upper bounds for the final scores of result candidates. This procedure performs sequential disk accesses for sorted index scans, but also has the option of performing random accesses to resolve score uncertainty. This entails scheduling for the two kinds of accesses: 1) the prioritization of different index lists in the sequential accesses, and 2) the decision on when to perform random accesses and for which candidates. The prior literature has studied some of these scheduling issues, but only for each of the two access types in isolation. The current paper takes an integrated view of the scheduling issues and develops novel strategies that outperform prior proposals by a large margin. Our main contributions are new, principled, scheduling methods based on a Knapsack-related optimization for sequential accesses and a cost model for random accesses. The methods can be further boosted by harnessing probabilistic estimators for scores, selectivities, and index list correlations. We also discuss efficient implementation techniques for the underlying data structures. In performance experiments with three different datasets (TREC Terabyte, HTTP server logs, and IMDB), our methods achieved significant performance gains compared to the best previously known methods: a factor of up to 3 in terms of execution costs, and a factor of 5 in terms of absolute run-times of our implementation. Our best techniques are close to a lower bound for the execution cost of the considered class of threshold algorithms.
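    For orientation, here is a minimal sketch of the classical threshold algorithm that this family builds on, with round-robin sorted access and immediate random-access score resolution. The toy lists are illustrative; the paper's contribution is precisely the scheduling of the two access types, which this baseline does not model.

        import heapq

        # Toy score lists: one (doc, score) list per query term, sorted descending.
        lists = [
            [("d1", 0.9), ("d3", 0.7), ("d2", 0.2)],
            [("d3", 0.8), ("d2", 0.6), ("d1", 0.1)],
        ]
        scores = [dict(l) for l in lists]  # random-access lookup per list

        def ta_top_k(lists, scores, k):
            """Fagin-style threshold algorithm with summed scores."""
            best = {}
            for depth in range(max(len(l) for l in lists)):
                for l in lists:
                    if depth >= len(l):
                        continue
                    doc, _ = l[depth]
                    if doc not in best:  # resolve full score via random accesses
                        best[doc] = sum(s.get(doc, 0.0) for s in scores)
                # Threshold: best possible aggregate score of any unseen document.
                threshold = sum(l[min(depth, len(l) - 1)][1] for l in lists)
                top = heapq.nlargest(k, best.items(), key=lambda kv: kv[1])
                if len(top) == k and top[-1][1] >= threshold:
                    return top  # early termination: index scans can stop here
            return heapq.nlargest(k, best.items(), key=lambda kv: kv[1])

        print(ta_top_k(lists, scores, 2))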
  • Alexander Belyaev, Shin Yoshizawa, and Hans-Peter Seidel, "Skeleton-driven Laplacian Mesh Deformations", in Research Report / Max-Planck-Institut für Informatik, (2006), pp. 37 p.  
    Abstract
    In this report, a new free-form shape deformation approach is proposed. We combine a skeleton-driven mesh deformation technique with discrete differential coordinates in order to create natural-looking global shape deformations. Given a triangle mesh, we first extract a skeletal mesh, a two-sided Voronoi-based approximation of the medial axis. Next the skeletal mesh is modified by free-form deformations. Then a desired global shape deformation is obtained by reconstructing the shape corresponding to the deformed skeletal mesh. The reconstruction is based on using discrete differential coordinates. Our method preserves fine geometric details and original shape thickness due to the use of discrete differential coordinates and skeleton-driven deformations. We also develop a new mesh evolution technique which allows us to eliminate possible global and local self-intersections of the deformed mesh while preserving fine geometric details. Finally, we present a multiresolution version of our approach in order to simplify and accelerate the deformation process.
  • Alexander Belyaev, Torsten Langer, and Hans-Peter Seidel, "Mean value coordinates for arbitrary spherical polygons and polyhedra in $\mathbb{R}^{3}$", in Research Report / Max-Planck-Institut für Informatik, (2006), pp. 19 p.  
    Abstract
    Since their introduction, mean value coordinates enjoy ever increasing popularity in computer graphics and computational mathematics because they exhibit a variety of good properties. Most importantly, they are defined in the whole plane which allows interpolation and extrapolation without restrictions. Recently, mean value coordinates were generalized to spheres and to $\mathbb{R}^{3}$. We show that these spherical and 3D mean value coordinates are well-defined on the whole sphere and the whole space $\mathbb{R}^{3}$, respectively.
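    For orientation, the planar mean value coordinates of a point $v$ with respect to polygon vertices $v_1, \dots, v_n$ are
        $\lambda_i = \frac{w_i}{\sum_{j=1}^{n} w_j}, \qquad w_i = \frac{\tan(\alpha_{i-1}/2) + \tan(\alpha_i/2)}{\lVert v_i - v \rVert},$
    where $\alpha_i$ is the angle at $v$ in the triangle $(v, v_i, v_{i+1})$; the report's results concern the well-definedness of the spherical and 3D generalizations of these weights.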
  • Matthias Bender, Sebastian Michel, Gerhard Weikum, and Peter Triantafillou, "Overlap-aware global df estimation in distributed information retrieval systems", in Research Report / Max-Planck-Institut für Informatik, (2006), pp. 25 p.
    Abstract
    Peer-to-Peer (P2P) search engines and other forms of distributed information retrieval (IR) are gaining momentum. Unlike in centralized IR, it is difficult and expensive to compute statistical measures about the entire document collection as it is widely distributed across many computers in a highly dynamic network. On the other hand, such network-wide statistics, most notably, global document frequencies of the individual terms, would be highly beneficial for ranking global search results that are compiled from different peers. This paper develops an efficient and scalable method for estimating global document frequencies in a large-scale, highly dynamic P2P network with autonomous peers. The main difficulty that is addressed in this paper is that the local collections of different peers may arbitrarily overlap, as many peers may choose to gather popular documents that fall into their specific interest profile. Our method is based on hash sketches as an underlying technique for compact data synopses, and exploits specific properties of hash sketches for duplicate elimination in the counting process. We report on experiments with real Web data that demonstrate the accuracy of our estimation method and also the benefit for better search result ranking.
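    The duplicate insensitivity that the method relies on can be illustrated with a bare-bones Flajolet-Martin hash sketch (a single bitmap, no averaging, illustrative constants): because each document sets the same bit on every peer, OR-ing the peers' bitmaps counts overlapping documents only once.

        import hashlib

        def rho(h):
            """Position of the least-significant 1-bit of integer h (0-based)."""
            return (h & -h).bit_length() - 1

        def sketch(doc_ids, bits=32):
            bm = 0
            for d in doc_ids:
                h = int.from_bytes(hashlib.sha1(d.encode()).digest()[:8], "big")
                if h:
                    bm |= 1 << min(rho(h), bits - 1)
            return bm

        def estimate(bm):
            """FM estimate 2^R / 0.77351, with R the lowest unset bit position."""
            r = 0
            while bm >> r & 1:
                r += 1
            return 2 ** r / 0.77351

        peer1 = sketch(f"doc{i}" for i in range(0, 600))
        peer2 = sketch(f"doc{i}" for i in range(300, 900))  # overlaps with peer1
        # Order-of-magnitude estimate of the ~900 distinct documents; the 300
        # shared documents are not double-counted by the union.
        print(estimate(peer1 | peer2))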
  • Eric Berberich, Franziska Ebert, Efi Fogel, and Lutz Kettner, "Web-site with Benchmark Instances for Planar Curve Arrangements", (University of Groningen, Groningen, The Netherlands, 2006).  
  • Eric Berberich, Franziska Ebert, and Lutz Kettner, "Definition of File Format for Benchmark Instances for Arrangements of Quadrics", (University of Groningen, Groningen, The Netherlands, 2006).  
  • Edilson de Aguiar, Rhaleb Zayer, Christian Theobalt, Marcus A. Magnor, and Hans-Peter Seidel, "A framework for natural animation of digitized models", in Research Report / Max-Planck-Institut für Informatik, (2006), pp. 27 p.  
    Abstract
    We present a novel versatile, fast and simple framework to generate high-quality animations of scanned human characters from input motion data. Our method is purely mesh-based and, in contrast to skeleton-based animation, requires only a minimum of manual interaction. The only manual step that is required to create moving virtual people is the placement of a sparse set of correspondences between triangles of an input mesh and triangles of the mesh to be animated. The proposed algorithm implicitly generates realistic body deformations, and can easily transfer motions between humans of different shape and proportions. It handles different types of input data, e.g. other animated meshes and motion capture files, in just the same way. Finally, and most importantly, it creates animations at interactive frame rates. We feature two working prototype systems that demonstrate that our method can generate lifelike character animations from both marker-based and marker-less optical motion capture data.
  • Benjamin Doerr and Michael Gnewuch, "Construction of Low-discrepancy Point Sets of Small Size by Bracketing Covers and Dependent Randomized Rounding", (University Kiel, Kiel, 2006).  
    Abstract
    We provide a deterministic algorithm that constructs small point sets exhibiting a low star discrepancy. The algorithm is based on bracketing and on recent results on randomized roundings respecting hard constraints. It is structurally much simpler than the previous algorithm presented for this problem in [B. Doerr, M. Gnewuch, A. Srivastav. Bounds and constructions for the star discrepancy via δ-covers. J. Complexity, 21: 691-709, 2005]. Besides leading to better theoretical run time bounds, our approach can be implemented with reasonable effort.
  • Alexander Efremov, Rafal Mantiuk, Karol Myszkowski, and Hans-Peter Seidel, "Design and evaluation of backward compatible high dynamic range video compression", in Research Report / Max-Planck-Institut für Informatik, (2006), pp. 50 p.  
    Abstract
    In this report we describe the details of the backward compatible high dynamic range (HDR) video compression algorithm. The algorithm is designed to facilitate a smooth transition from standard low dynamic range (LDR) video to high fidelity high dynamic range content. The HDR and the corresponding LDR video frames are decorrelated and then compressed into a single MPEG stream, which can be played on both existing DVD players and HDR-enabled devices.
  • Khaled Elbassioni, "On the Complexity of Monotone Boolean Duality Testing", (DIMACS, Piscataway, NJ, 2006).  
    Abstract
    We show that the duality of a pair of monotone Boolean functions in disjunctive normal forms can be tested in polylogarithmic time using a quasi-polynomial number of processors. Our decomposition technique yields stronger bounds on the complexity of the problem than those currently known and also allows for generating all minimal transversals of a given hypergraph using only polynomial space.
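    The decision problem itself is simple to state. A brute-force checker for small instances (exponential in the number of variables, so in no way reflecting the quasi-polynomial parallel bounds of the paper; DNFs are assumed to be given as lists of variable-index sets) might look as follows:

        from itertools import product

        def dnf_eval(dnf, x):
            """Evaluate a monotone DNF given as an iterable of index sets."""
            return any(all(x[i] for i in term) for term in dnf)

        def are_dual(f, g, n):
            """f and g are dual iff g(x) == not f(complement of x) for all x."""
            return all(
                dnf_eval(g, x) == (not dnf_eval(f, tuple(1 - v for v in x)))
                for x in product((0, 1), repeat=n)
            )

        f = [{0, 1}, {2}]        # x0 x1 OR x2
        g = [{0, 2}, {1, 2}]     # its dual: x0 x2 OR x1 x2
        print(are_dual(f, g, 3))  # True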
  • Stefan Funke, Sören Laue, Rouven Naujoks, and Zvi Lotker, "Power assignment problems in wireless communication", in Research Report / Max-Planck-Institut für Informatik, (2006), pp. 25 p.
    Abstract
    A fundamental class of problems in wireless communication is concerned with the assignment of suitable transmission powers to wireless devices/stations such that the resulting communication graph satisfies certain desired properties and the overall energy consumed is minimized. Many concrete communication tasks in a wireless network like broadcast, multicast, point-to-point routing, creation of a communication backbone, etc. can be regarded as such a power assignment problem. This paper considers several problems of that kind; for example one problem studied before in (Vittorio Bilò et al: Geometric Clustering to Minimize the Sum of Cluster Sizes, ESA 2005) and (Helmut Alt et al.: Minimum-cost coverage of point sets by disks, SCG 2006) aims to select and assign powers to $k$ of the stations such that all other stations are within reach of at least one of the selected stations. We improve the running time for obtaining a $(1+\epsilon)$-approximate solution for this problem from $n^{((\alpha/\epsilon)^{O(d)})}$ as reported by Bilò et al. to $O\left( n+ {\left(\frac{k^{2d+1}}{\epsilon^d}\right)}^{ \min{\{\; 2k,\;\; (\alpha/\epsilon)^{O(d)} \;\}} } \right)$, that is, we obtain a running time that is linear in the network size. Further results include a constant approximation algorithm for the TSP problem under squared (non-metric!) edge costs, which can be employed to implement a novel data aggregation protocol, as well as efficient schemes to perform $k$-hop multicasts.
  • Stefan Funke, Christian Klein, Kurt Mehlhorn, and Susanne Schmitt, "Controlled Perturbation for Delaunay Triangulations", (Algorithms for Complex Shapes with certified topology and numerics, Instituut voor Wiskunde en Informatica, Groningen, NETHERLANDS, 2006).  
  • Vlastimil Havran, Robert Herzog, and Hans-Peter Seidel, "On fast construction of spatial hierarchies for ray tracing", in Research Report / Max-Planck-Institut für Informatik, (2006), pp. 40 p.  
    Abstract
    In this paper we address the problem of fast construction of spatial hierarchies for ray tracing with applications in animated environments including non-rigid animations. We discuss properties of currently used techniques with $O(N \log N)$ construction time for kd-trees and bounding volume hierarchies. Further, we propose a hybrid data structure blending between a spatial kd-tree and bounding volume primitives. We keep our novel hierarchical data structures algorithmically efficient and comparable with kd-trees by the use of a cost model based on surface area heuristics. Although $O(N \log N)$ time is a lower bound for the construction of any spatial hierarchy that corresponds to sorting based on comparisons, we propose new hierarchical data structures with expected $O(N \log\log N)$ construction time, using an approximate method based on discretization. We also discuss the constants behind the construction algorithms of spatial hierarchies, which are important in practice. We document the performance of our algorithms by results obtained from the implementation tested on nine different scenes.
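    The surface area heuristic referred to above assigns to a split of a node $V$ into children $V_L, V_R$ the expected cost
        $C(V \to V_L, V_R) = C_{\mathrm{trav}} + \frac{SA(V_L)}{SA(V)}\, N_L\, C_{\mathrm{isect}} + \frac{SA(V_R)}{SA(V)}\, N_R\, C_{\mathrm{isect}},$
    where $SA(\cdot)$ is surface area, $N_L, N_R$ are the primitive counts of the children, and $C_{\mathrm{trav}}, C_{\mathrm{isect}}$ are traversal and intersection cost constants; construction greedily picks the split minimizing this cost.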
  • Gjergji Kasneci, Fabian Suchanek, and Gerhard Weikum, "Yago - a core of semantic knowledge", in Research Report / Max-Planck-Institut für Informatik, (2006), pp. 39 p.  
    Abstract
    We present YAGO, a light-weight and extensible ontology with high coverage and quality. YAGO builds on entities and relations and currently contains roughly 900,000 entities and 5,000,000 facts. This includes the Is-A hierarchy as well as non-taxonomic relations between entities (such as hasWonPrize). The facts have been automatically extracted from the unification of Wikipedia and WordNet, using a carefully designed combination of rule-based and heuristic methods described in this paper. The resulting knowledge base is a major step beyond WordNet: in quality by adding knowledge about individuals like persons, organizations, products, etc. with their semantic relationships -- and in quantity by increasing the number of facts by more than an order of magnitude. Our empirical evaluation of fact correctness shows an accuracy of about 95%. YAGO is based on a logically clean model, which is decidable, extensible, and compatible with RDFS. Finally, we show how YAGO can be further extended by state-of-the-art information extraction techniques.
  • Michael Kerber, "Division-free computation of subresultants using bezout matrices", in Research Report / Max-Planck-Institut für Informatik, (2006), pp. 20 p.  
    Abstract
    We present an algorithm to compute the subresultant sequence of two polynomials that completely avoids division in the ground domain, generalizing an algorithm from Abdeljaoued et al. (Minors of Bezout Matrices..., Int. J. of Comp. Math. 81, 2004). This is done by evaluating determinants of slightly manipulated Bezout matrices using an algorithm of Berkowitz. Experiments show that our algorithm is superior compared to pseudo-division approaches for moderate degrees if the domain contains indeterminates.
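    A minimal sympy sketch of the underlying objects, namely the Bezout matrix and a division-free determinant via Berkowitz's method (which sympy happens to expose); the example polynomials are illustrative, and the sketch computes a single determinant rather than the full subresultant sequence of the paper:

        import sympy as sp

        x, y = sp.symbols("x y")

        def bezout_matrix(f, g):
            """Bezout matrix of f, g in x: entry (i, j) is the coefficient of
            x**i * y**j in (f(x)*g(y) - f(y)*g(x)) / (x - y)."""
            n = int(max(sp.degree(f, x), sp.degree(g, x)))
            cayley = sp.quo(sp.expand(f * g.subs(x, y) - f.subs(x, y) * g),
                            x - y, x)  # the division by (x - y) is exact
            p = sp.Poly(cayley, x, y)
            return sp.Matrix(n, n, lambda i, j: p.coeff_monomial(x**i * y**j))

        f = x**3 - 3*x + 2   # shares the root x = 1 with g
        g = x**2 - 1
        B = bezout_matrix(f, g)
        # Both values vanish here since gcd(f, g) is nontrivial; Berkowitz's
        # method keeps the determinant computation free of divisions.
        print(B.det(method="berkowitz"), sp.resultant(f, g, x))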
  • Julia Luxenburger and Gerhard Weikum, "Exploiting Community Behavior for Enhanced Link Analysis and Web Search", (University of Paderborn, Heinz Nixdorf Institute, Paderborn, Germany, 2006).  
    Abstract
    Methods for Web link analysis and authority ranking such as PageRank are based on the assumption that a user endorses a Web page when creating a hyperlink to this page. There is a wealth of additional user-behavior information that could be considered for improving authority analysis, for example, the history of queries that a user community posed to a search engine over an extended time period, or observations about which query-result pages were clicked on and which ones were not clicked on after a user saw the summary snippets of the top-10 results. This paper enhances link analysis methods by incorporating additional user assessments based on query logs and click streams, including negative feedback when a query-result page does not satisfy the user demand or is even perceived as spam. Our methods use various novel forms of advanced Markov models whose states correspond to users and queries in addition to Web pages and whose links also reflect the relationships derived from query-result clicks, query refinements, and explicit ratings. Preliminary experiments are presented as a proof of concept.
  • Oliver Schall, Alexander Belyaev, and Hans-Peter Seidel, "Feature-preserving non-local denoising of static and time-varying range data", in Research Report / Max-Planck-Institut für Informatik, (2006), pp. 22 p.  
    Abstract
    We present a novel algorithm for accurately denoising static and time-varying range data. Our approach is inspired by similarity-based non-local image filtering. We show that our proposed method is easy to implement and outperforms recent state-of-the-art filtering approaches. Furthermore, it preserves fine shape features and produces an accurate smoothing result in the spatial domain and along the time domain.
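    The image filter that inspires the approach is the classical non-local means filter: a pixel $i$ is replaced by a weighted average of all pixels $j$, with weights driven by the similarity of whole patches $P_i$, $P_j$ around them,
        $NL(u)(i) = \frac{1}{C(i)} \sum_{j} e^{-\lVert u(P_i) - u(P_j) \rVert_2^2 / h^2}\, u(j), \qquad C(i) = \sum_{j} e^{-\lVert u(P_i) - u(P_j) \rVert_2^2 / h^2},$
    where $h$ controls the filtering strength; the report adapts this patch-similarity weighting to static and time-varying range data.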
  • Volker Scholz and Marcus A. Magnor, "Garment texture editing in monocular video sequences based on color-coded printing patterns", in Research Report / Max-Planck-Institut für Informatik, (2006), pp. 24 p.  
    Abstract
    In this report we present a video processing algorithm for texture replacement of moving garments. We use a color-coded pattern which encodes texture coordinates within a local neighborhood in order to robustly determine the geometric deformation of the fabric. A time-coherent texture interpolation is obtained by the use of 3D radial basis functions. Shading maps are determined with a technique operating in the image gradient domain. We use the recovered shading maps to realistically apply new textures to the color-coded garment in the video sequence. Our method enables editing and even completely exchanging fabric pattern designs of garments worn by actors as a video post-processing step.
  • Fabian Suchanek, Georgiana Ifrim, and Gerhard Weikum, "Combining linguistic and statistical analysis to extract relations from web documents", in Research Report / Max-Planck-Institut für Informatik, (2006), pp. 37 p.  
    Abstract
    Search engines, question answering systems and classification systems alike can greatly profit from formalized world knowledge. Unfortunately, manually compiled collections of world knowledge (such as WordNet or the Suggested Upper Merged Ontology SUMO) often suffer from low coverage, high assembling costs and fast aging. In contrast, the World Wide Web provides an endless source of knowledge, assembled by millions of people, updated constantly and available for free. In this paper, we propose a novel method for learning arbitrary binary relations from natural language Web documents, without human interaction. Our system, LEILA, combines linguistic analysis and machine learning techniques to find robust patterns in the text and to generalize them. For initialization, we only require a set of examples of the target relation and a set of counterexamples (e.g. from WordNet). The architecture consists of 3 stages: Finding patterns in the corpus based on the given examples, assessing the patterns based on probabilistic confidence, and applying the generalized patterns to propose pairs for the target relation. We prove the benefits and practical viability of our approach by extensive experiments, showing that LEILA achieves consistent improvements over existing comparable techniques (e.g. Snowball, TextToOnto).
  • Christian Theobalt, Naveed Ahmed, Hendrik P. A. Lensch, Marcus A. Magnor, and Hans-Peter Seidel, "Enhanced dynamic reflectometry for relightable free-viewpoint video", in Research Report / Max-Planck-Institut für Informatik, (2006), pp. 37 p.  
    Abstract
    Free-Viewpoint Video of Human Actors allows photorealistic rendering of real-world people under novel viewing conditions. Dynamic Reflectometry extends the concept of free-viewpoint video and allows rendering in addition under novel lighting conditions. In this work, we present an enhanced method for capturing human shape and motion as well as dynamic surface reflectance properties from a sparse set of input video streams. We augment our initial method for model-based relightable free-viewpoint video in several ways. Firstly, a single-skin mesh is introduced for the continuous appearance of the model. Moreover, an algorithm to detect and compensate for lateral shifting of textiles in order to improve temporal texture registration is presented. Finally, a structured resampling approach is introduced which enables reliable estimation of spatially varying surface reflectance despite a static recording setup. The new algorithm ingredients along with the Relightable 3D Video framework enable us to realistically reproduce the appearance of animated virtual actors under different lighting conditions, as well as to interchange surface attributes among different people, e.g. for virtual dressing. Our contribution can be used to create 3D renditions of real-world people under arbitrary novel lighting conditions on standard graphics hardware.
  • Thomas Wies, Viktor Kuncak, Karen Zee, Andreas Podelski, and Martin Rinard, "On verifying complex properties using symbolic shape analysis", in Research Report / Max-Planck-Institut für Informatik, (2006), pp. 32 p.  
    Abstract
    One of the main challenges in the verification of software systems is the analysis of unbounded data structures with dynamic memory allocation, such as linked data structures and arrays. We describe Bohne, a new analysis for verifying data structures. Bohne verifies data structure operations and shows that 1) the operations preserve data structure invariants and 2) the operations satisfy their specifications expressed in terms of changes to the set of objects stored in the data structure. During the analysis, Bohne infers loop invariants in the form of disjunctions of universally quantified Boolean combinations of formulas represented as sets of binary decision diagrams. To synthesize loop invariants of this form, Bohne uses a combination of decision procedures for Monadic Second-Order Logic over trees, SMT-LIB decision procedures (currently CVC Lite), and an automated reasoner within the Isabelle interactive theorem prover. This architecture shows that synthesized loop invariants can serve as a useful communication mechanism between different decision procedures. In addition, Bohne uses field constraint analysis, a combination mechanism that enables the use of uninterpreted function symbols within formulas of Monadic Second-Order Logic over trees. Using Bohne, we have verified operations on data structures such as linked lists with iterators and back pointers, trees with and without parent pointers, two-level skip lists, array data structures, and sorted lists. We have deployed Bohne in the Hob and Jahob data structure analysis systems, enabling us to combine Bohne with analyses of data structure clients and apply it in the context of larger programs. This report describes the Bohne algorithm as well as techniques that Bohne uses to reduce the amount of annotations and the running time of the analysis.
  • Gernot Ziegler, Art Tevs, Christian Theobalt, and Hans-Peter Seidel, "GPU point list generation through histogram pyramids", in Research Report / Max-Planck-Institut für Informatik, (2006), pp. 13 p.  
    Abstract
    Image pyramids are frequently used in porting non-local algorithms to graphics hardware. A histogram pyramid (short: HistoPyramid), a special version of an image pyramid, sums up the number of active entries in a 2D image hierarchically. We show how a HistoPyramid can be utilized as an implicit indexing data structure, allowing us to convert a sparse matrix into a coordinate list of active cell entries (a point list) on graphics hardware. The algorithm reduces a highly sparse matrix with N elements to a list of its M active entries in O(N) + M log N steps, despite the restricted graphics hardware architecture. Applications are numerous, including feature detection, pixel classification and binning, conversion of 3D volumes to particle clouds and sparse matrix compression.
  • MPI for Informatics, Max Planck Society (ed), "Seventh Biennial Report: June 2003 - March 2005", in Biennial Report / Max-Planck-Institut für Informatik, (2005), Vol. 7, pp. 526 p.  
  • Surender Baswana and Kavitha Telikepalli, "Improved algorithms for all-pairs approximate shortest paths in weighted graphs", in Research Report / Max-Planck-Institut für Informatik, (2005), pp. 26 p.  
    Citation
    Surender Baswana and Kavitha Telikepalli, "Improved algorithms for all-pairs approximate shortest paths in weighted graphs", in Research Report / Max-Planck-Institut für Informatik, (2005), pp. 26 p.
    Abstract
    The all-pairs approximate shortest-paths problem is an interesting variant of the classical all-pairs shortest-paths problem in graphs. The problem aims at building a data-structure for a given graph with the following two features. Firstly, for any two vertices, it should report an {\emph{approximate}} shortest path between them, that is, a path which is longer than the shortest path by some {\emph{small}} factor. Secondly, the data-structure should require less preprocessing time (strictly sub-cubic) and occupy optimal space (sub-quadratic), at the cost of this approximation. In this paper, we present algorithms for computing all-pairs approximate shortest paths in a weighted undirected graph. These algorithms significantly improve the existing results for this problem.
    Export
  • Hans de Nivelle, "Using resolution as a decision procedure", in Research Report / Max-Planck-Institut für Informatik, (2005), pp. 53 p.  
    Citation
    Hans de Nivelle, "Using resolution as a decision procedure", in Research Report / Max-Planck-Institut für Informatik, (2005), pp. 53 p.
    Export
  • Roman Dementiev, Lutz Kettner, and Peter Sanders, "STXXL: Standard Template Library for XXL Data Sets", (Fakultät für Informatik, University of Karlsruhe, Karlsruhe, Germany, 2005).  
    Citation
    Roman Dementiev, Lutz Kettner, and Peter Sanders, "STXXL: Standard Template Library for XXL Data Sets", (Fakultät für Informatik, University of Karlsruhe, Karlsruhe, Germany, 2005).
    Export
  • Christian Fuchs, Michael Gösele, Tongbo Chen, and Hans-Peter Seidel, "An empirical model for heterogeneous translucent objects", in Research Report / Max-Planck-Institut für Informatik, (2005), pp. 20 p.  
    Citation
    Christian Fuchs, Michael Gösele, Tongbo Chen, and Hans-Peter Seidel, "An empirical model for heterogeneous translucent objects", in Research Report / Max-Planck-Institut für Informatik, (2005), pp. 20 p.
    Abstract
    We introduce an empirical model for multiple scattering in heterogeneous translucent objects for which classical approximations such as the dipole approximation to the diffusion equation are no longer valid. Motivated by the exponential fall-off of scattered intensity with distance, diffuse subsurface scattering is represented as a sum of exponentials per surface point plus a modulation texture. Modeling quality can be improved by using an anisotropic model where exponential parameters are determined per surface location and scattering direction. We validate the scattering model for a set of planar object samples which were recorded under controlled conditions and quantify the modeling error. Furthermore, several translucent objects with complex geometry are captured and compared to the real object under similar illumination conditions.
    Export
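As a toy illustration of the sum-of-exponentials fall-off model (not the report's acquisition pipeline; the modulation texture and the anisotropic variant are omitted, and the measurements here are synthetic), one per-surface-point fit could look like this:

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-lobe sum-of-exponentials fall-off of scattered intensity vs. distance.
def falloff(d, a1, b1, a2, b2):
    return a1 * np.exp(-b1 * d) + a2 * np.exp(-b2 * d)

# hypothetical noisy measurements for one surface point
d = np.linspace(0.1, 5.0, 50)
rng = np.random.default_rng(0)
y = falloff(d, 1.0, 2.5, 0.3, 0.4) + rng.normal(0, 0.005, d.size)

# distinct initial guesses keep the two lobes from collapsing
params, _ = curve_fit(falloff, d, y, p0=[1.0, 2.0, 0.5, 0.5])
print(params)  # recovered (a1, b1, a2, b2)
```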
  • Martin Fuchs, Volker Blanz, Hendrik P. A. Lensch, and Hans-Peter Seidel, "Reflectance from images: a model-based approach for human faces", in Research Report / Max-Planck-Institut für Informatik, (2005), pp. 33 p.  
    Citation
    Martin Fuchs, Volker Blanz, Hendrik P. A. Lensch, and Hans-Peter Seidel, "Reflectance from images: a model-based approach for human faces", in Research Report / Max-Planck-Institut für Informatik, (2005), pp. 33 p.
    Abstract
    In this paper, we present an image-based framework that acquires the reflectance properties of a human face. A range scan of the face is not required. Based on a morphable face model, the system estimates the 3D shape, and establishes point-to-point correspondence across images taken from different viewpoints, and across different individuals' faces. This provides a common parameterization of all reconstructed surfaces that can be used to compare and transfer BRDF data between different faces. Shape estimation from images compensates for deformations of the face during the measurement process, such as facial expressions. In the common parameterization, regions of homogeneous materials on the face surface can be defined a-priori. We apply analytical BRDF models to express the reflectance properties of each region, and we estimate their parameters in a least-squares fit from the image data. For each of the surface points, the diffuse component of the BRDF is locally refined, which provides high detail. We present results for multiple analytical BRDF models, rendered at novel orientations and lighting conditions.
    Export
  • Craig Gotsman, Kanela Kaligosi, Kurt Mehlhorn, Dimitrios Michail, and Evangelia Pyrga, "Cycle bases of graphs and sampled manifolds", in Research Report / Max-Planck-Institut für Informatik, (2005), pp. 30 p.  
    Citation
    Craig Gotsman, Kanela Kaligosi, Kurt Mehlhorn, Dimitrios Michail, and Evangelia Pyrga, "Cycle bases of graphs and sampled manifolds", in Research Report / Max-Planck-Institut für Informatik, (2005), pp. 30 p.
    Abstract
    Point samples of a surface in $\mathbb{R}^3$ are the dominant output of a multitude of 3D scanning devices. The usefulness of these devices rests on being able to extract properties of the surface from the sample. We show that, under certain sampling conditions, the minimum cycle basis of a nearest neighbor graph of the sample encodes topological information about the surface and yields bases for the trivial and non-trivial loops of the surface. We validate our results by experiments.
    Export
  • Jörg Hoffmann, Carla Gomes, and Bart Selman, "Bottleneck behavior in CNF formulas", in Research Report / Max-Planck-Institut für Informatik, (2005), pp. 35 p.  
    Citation
    Jörg Hoffmann, Carla Gomes, and Bart Selman, "Bottleneck behavior in CNF formulas", in Research Report / Max-Planck-Institut für Informatik, (2005), pp. 35 p.
    Abstract
    Recent research has revealed that CNF formulas encoding applications often contain very small backdoors, i.e. subsets of variables branching over which suffices to decide satisfiability with a DPLL procedure. We shed light on what kinds of problem structure {\em cause} the existence of such small backdoors. We examine carefully crafted synthetic domains, clean enough to allow a rigorous analysis of DPLL search spaces, yet meaningful enough to allow conclusions about more practical domains. The synthetic CNF encodings are parameterized not only in their size, but also by a structure parameter controlling the extent of an intuitive bottleneck behavior in the underlying task. We investigate the size of the smallest possible backdoors as a function of the structure parameter. We prove upper bounds that, in certain cases, decrease {\em exponentially} in that parameter. We empirically verified the upper bounds as lower bounds in formulas small enough for full empirical exploration, and we conjecture that the upper bounds are exact in general. We present empirical results indicating that our notion of bottleneck behavior is relevant not only in our synthetic domains, but also in more practical domains.
    Export
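For concreteness, the backdoor notion above can be checked by brute force: a set B of variables is a strong backdoor with respect to, say, unit propagation if every assignment to B leaves a residual formula that propagation alone decides. A minimal sketch of such a checker (plain Python, not the machinery used in the report):

```python
from itertools import product

def up(clauses, assign):
    """Unit propagation on a CNF (lists of signed ints); returns
    'SAT', 'UNSAT', or 'OPEN' (propagation alone cannot decide)."""
    assign = dict(assign)
    while True:
        unit, undecided = None, False
        for c in clauses:
            if any(assign.get(abs(l)) == (l > 0) for l in c):
                continue                    # clause already satisfied
            free = [l for l in c if abs(l) not in assign]
            if not free:
                return 'UNSAT'              # clause falsified
            if len(free) == 1:
                unit = free[0]              # unit clause found
                break
            undecided = True
        if unit is None:
            return 'OPEN' if undecided else 'SAT'
        assign[abs(unit)] = unit > 0

def is_strong_backdoor(clauses, variables):
    """Every assignment to the backdoor set must be decided by propagation."""
    return all(up(clauses, dict(zip(variables, bits))) != 'OPEN'
               for bits in product([False, True], repeat=len(variables)))

# (x1 or x2) & (not x1 or x3) & (not x2 or not x3): {x1} is a backdoor
f = [[1, 2], [-1, 3], [-2, -3]]
print(is_strong_backdoor(f, [1]), is_strong_backdoor(f, []))  # True False
```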
  • Irit Katriel and Martin Kutz, "A faster algorithm for computing a longest common increasing subsequence", in Research Report / Max-Planck-Institut für Informatik, (2005), pp. 13 p.  
    Citation
    Irit Katriel and Martin Kutz, "A faster algorithm for computing a longest common increasing subsequence", in Research Report / Max-Planck-Institut für Informatik, (2005), pp. 13 p.
    Abstract
    Let $A=\langle a_1,\dots,a_n\rangle$ and $B=\langle b_1,\dots,b_m \rangle$ be two sequences with $m \ge n$, whose elements are drawn from a totally ordered set. We present an algorithm that finds a longest common increasing subsequence of $A$ and $B$ in $O(m\log m+n\ell\log n)$ time and $O(m + n\ell)$ space, where $\ell$ is the length of the output. A previous algorithm by Yang et al. needs $\Theta(mn)$ time and space, so ours is faster for a wide range of values of $m,n$ and $\ell$.
    Export
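For reference, the $\Theta(mn)$-time dynamic program of Yang et al. that the report improves upon is short enough to state directly (a minimal sketch, lengths only):

```python
def lcis_length(a, b):
    """Theta(len(a)*len(b)) DP for the longest common increasing subsequence."""
    dp = [0] * len(b)  # dp[j]: LCIS length ending at b[j], over a seen so far
    for x in a:
        best = 0       # best dp[j] among j with b[j] < x, scanning left to right
        for j, y in enumerate(b):
            if y < x:
                best = max(best, dp[j])
            elif y == x:
                dp[j] = max(dp[j], best + 1)
    return max(dp, default=0)

print(lcis_length([2, 3, 1, 6, 5, 4, 6], [1, 3, 5, 6]))  # -> 3, e.g. 3, 5, 6
```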
  • Irit Katriel, Martin Kutz, and Martin Skutella, "Reachability substitutes for planar digraphs", in Research Report / Max-Planck-Institut für Informatik, (2005), pp. 24 p.  
    Citation
    Irit Katriel, Martin Kutz, and Martin Skutella, "Reachability substitutes for planar digraphs", in Research Report / Max-Planck-Institut für Informatik, (2005), pp. 24 p.
    Abstract
    Given a digraph $G = (V,E)$ with a set $U$ of vertices marked ``interesting,'' we want to find a smaller digraph $\RS{} = (V',E')$ with $V' \supseteq U$ in such a way that the reachabilities amongst those interesting vertices in $G$ and \RS{} are the same. So with respect to the reachability relations within $U$, the digraph \RS{} is a substitute for $G$. We show that while almost all graphs do not have reachability substitutes smaller than $\Omega(|U|^2/\log |U|)$, every planar graph has a reachability substitute of size $O(|U| \log^2 |U|)$. Our result rests on two new structural results for planar DAGs, a separation procedure and a reachability theorem, which might be of independent interest.
    Export
  • Grzegorz Krawczyk, Michael Gösele, and Hans-Peter Seidel, "Photometric calibration of high dynamic range cameras", in Research Report / Max-Planck-Institut für Informatik, (2005), pp. 21 p.  
    Citation
    Grzegorz Krawczyk, Michael Gösele, and Hans-Peter Seidel, "Photometric calibration of high dynamic range cameras", in Research Report / Max-Planck-Institut für Informatik, (2005), pp. 21 p.
    Export
  • Torsten Langer, Alexander Belyaev, and Hans-Peter Seidel, "Analysis and design of discrete normals and curvatures", in Research Report / Max-Planck-Institut für Informatik, (2005), pp. 42 p.  
    Citation
    Torsten Langer, Alexander Belyaev, and Hans-Peter Seidel, "Analysis and design of discrete normals and curvatures", in Research Report / Max-Planck-Institut für Informatik, (2005), pp. 42 p.
    Abstract
    Accurate estimations of geometric properties of a surface (a curve) from its discrete approximation are important for many computer graphics and computer vision applications. To assess and improve the quality of such an approximation we assume that the smooth surface (curve) is known in general form. Then we can represent the surface (curve) by a Taylor series expansion and compare its geometric properties with the corresponding discrete approximations. In turn we can either prove convergence of these approximations towards the true properties as the edge lengths tend to zero, or we can get hints on how to eliminate the error. In this report we propose and study discrete schemes for estimating the curvature and torsion of a smooth 3D curve approximated by a polyline. Thereby we make some interesting findings about connections between (smooth) classical curves and certain estimation schemes for polylines. Furthermore, we consider several popular schemes for estimating the surface normal of a dense triangle mesh interpolating a smooth surface, and analyze their asymptotic properties. Special attention is paid to the mean curvature vector, which approximates both normal direction and mean curvature. We evaluate a common discrete approximation and show how asymptotic analysis can be used to improve it. It turns out that the integral formulation of the mean curvature \begin{equation*} H = \frac{1}{2 \pi} \int_0^{2 \pi} \kappa(\phi) d\phi \end{equation*} can be computed by an exact quadrature formula. The same is true for the integral formulations of Gaussian curvature and the Taubin tensor. The exact quadratures are then used to obtain reliable estimates of the curvature tensor of a smooth surface approximated by a dense triangle mesh. The proposed method is fast and often demonstrates a better performance than conventional curvature tensor estimation approaches. We also show that the curvature tensor approximated by our approach converges towards the true curvature tensor as the edge lengths tend to zero.
    Export
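One way to see why an exact quadrature for the mean curvature integral exists (a sketch, assuming Euler's theorem for the normal curvature; the report's schemes for discrete meshes are more involved):

```latex
% Euler's theorem writes the normal curvature in direction phi as a
% degree-2 trigonometric polynomial:
\kappa(\phi) = \kappa_1 \cos^2\phi + \kappa_2 \sin^2\phi
             = \frac{\kappa_1+\kappa_2}{2} + \frac{\kappa_1-\kappa_2}{2}\,\cos 2\phi .
% Hence the mean curvature integral is available in closed form,
\frac{1}{2\pi}\int_0^{2\pi} \kappa(\phi)\, d\phi \;=\; \frac{\kappa_1+\kappa_2}{2} \;=\; H,
% and, since equally spaced sampling integrates trigonometric polynomials
% of degree < N exactly, already N >= 3 sampled directions suffice:
H = \frac{1}{N}\sum_{j=0}^{N-1} \kappa\!\left(\frac{2\pi j}{N}\right), \qquad N \ge 3 .
```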
  • Patrick Maier, Witold Charatonik, and Lilia Georgieva, "Bounded model checking of pointer programs", in Research Report / Max-Planck-Institut für Informatik, (2005), pp. 34 p.  
    Citation
    Patrick Maier, Witold Charatonik, and Lilia Georgieva, "Bounded model checking of pointer programs", in Research Report / Max-Planck-Institut für Informatik, (2005), pp. 34 p.
    Abstract
    We propose a bounded model checking procedure for programs manipulating dynamically allocated pointer structures. Our procedure checks whether a program execution of length n ends in an error (e.g., a NULL dereference) by testing if the weakest precondition of the error condition together with the initial condition of the program (e.g., program variable x points to a circular list) is satisfiable. We express error conditions as formulas in the 2-variable fragment of the Bernays-Schoenfinkel class with equality. We show that this fragment is closed under computing weakest preconditions. We express the initial conditions by unary relations which are defined by monadic Datalog programs. Our main contribution is a small model theorem for the 2-variable fragment of the Bernays-Schoenfinkel class extended with least fixed points expressible by certain monadic Datalog programs. The decidability of this extension of first-order logic gives us a bounded model checking procedure for programs manipulating dynamically allocated pointer structures. In contrast to SAT-based bounded model checking, we do not bound the size of the heap a priori, but allow for pointer structures of arbitrary size. Thus, we are doing bounded model checking of infinite state transition systems.
    Export
  • Dimitrios Michail, "Rank-maximal through maximum weight matchings", in Research Report / Max-Planck-Institut für Informatik, (2005), pp. 22 p.  
    Citation
    Dimitrios Michail, "Rank-maximal through maximum weight matchings", in Research Report / Max-Planck-Institut für Informatik, (2005), pp. 22 p.
    Abstract
    Given a bipartite graph $G(V, E)$, $V = A \dot\cup B$, where $|V|=n$, $|E|=m$, and a partition of the edge set into $r \le m$ disjoint subsets $E = E_1 \dot\cup E_2 \dot\cup \dots \dot\cup E_r$, called ranks, the {\em rank-maximal matching} problem is to find a matching $M$ of $G$ such that $|M \cap E_1|$ is maximized, subject to this $|M \cap E_2|$ is maximized, and so on. Such a problem arises as an optimization criterion over possible assignments of a set of applicants to a set of posts. The matching represents the assignment, and the ranks on the edges correspond to rankings of the posts submitted by the applicants. The rank-maximal matching problem has been studied before, and an $O(r \sqrt{n} m)$ time and linear space algorithm~\cite{IKMMP} was presented. In this paper we present a new, simpler algorithm which matches the running time and space complexity of the above algorithm. The new algorithm is based on a different approach: it exploits the fact that the rank-maximal matching problem can be reduced to a maximum weight matching problem in which an edge of rank $i$ has weight $2^{\lceil\log n\rceil (r-i)}$. Since these edge weights are steeply distributed, we can design a scaling algorithm which scales by a factor of $n$ in each phase. We also show that in each phase one maximum cardinality matching computation is sufficient to obtain a new optimal solution. This answers an open question raised in the same paper, namely whether the reduction to the maximum-weight matching problem can help us derive an efficient algorithm.
    Export
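The reduction named in the abstract is easy to state in code. The sketch below uses networkx's general max_weight_matching as a stand-in for the report's scaling algorithm (whose whole point is to avoid working with these huge weights directly); node labels ('a', u) and ('p', v) merely keep the two sides distinct.

```python
import math
import networkx as nx

def rank_maximal_matching(n_left, n_right, ranked_edges):
    """ranked_edges: list of (applicant, post, rank) with ranks in 1..r.
    Rank-i edges get weight 2^(ceil(log2 n)*(r-i)); any maximum-weight
    matching is then rank-maximal."""
    n = n_left + n_right
    r = max(rank for _, _, rank in ranked_edges)
    c = math.ceil(math.log2(n)) if n > 1 else 1
    g = nx.Graph()
    for u, v, rank in ranked_edges:
        g.add_edge(('a', u), ('p', v), weight=2 ** (c * (r - rank)))
    return nx.max_weight_matching(g)

# two applicants, two posts; both prefer post 0 (rank 1) over post 1 (rank 2)
edges = [(0, 0, 1), (0, 1, 2), (1, 0, 1), (1, 1, 2)]
print(rank_maximal_matching(2, 2, edges))  # one rank-1 edge, one rank-2 edge
```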
  • Oliver Schall, Alexander Belyaev, and Hans-Peter Seidel, "Sparse meshing of uncertain and noisy surface scattered data", in Research Report / Max-Planck-Institut für Informatik, (2005), pp. 20 p.  
    Citation
    Oliver Schall, Alexander Belyaev, and Hans-Peter Seidel, "Sparse meshing of uncertain and noisy surface scattered data", in Research Report / Max-Planck-Institut für Informatik, (2005), pp. 20 p.
    Abstract
    In this paper, we develop a method for generating a high-quality approximation of a noisy set of points sampled from a smooth surface by a sparse triangle mesh. The main idea of the method consists of defining an appropriate set of approximation centers and using them as the vertices of a mesh approximating the given scattered data. To choose the approximation centers, a clustering procedure is used. With every point of the input data we associate a local uncertainty measure which is used to estimate the importance of the point's contribution to the reconstructed surface. Then a global uncertainty measure is constructed from the local ones. The approximation centers are chosen as the points where the global uncertainty measure attains its local minima. This allows us to achieve a high-quality approximation of uncertain and noisy point data by a sparse mesh. An interesting feature of our approach is that the uncertainty measures take into account the normal directions estimated at the scattered points. In particular, this results in accurate reconstruction of high-curvature regions.
    Export
  • Stefan Siersdorfer and Gerhard Weikum, "Automated retraining methods for document classification and their parameter tuning", in Research Report / Max-Planck-Institut für Informatik, (2005), pp. 23 p.  
    Citation
    Stefan Siersdorfer and Gerhard Weikum, "Automated retraining methods for document classification and their parameter tuning", in Research Report / Max-Planck-Institut für Informatik, (2005), pp. 23 p.
    Abstract
    This paper addresses the problem of semi-supervised classification on document collections using retraining (also called self-training). A possible application is focused Web crawling which may start with very few, manually selected, training documents but can be enhanced by automatically adding initially unlabeled, positively classified Web pages for retraining. Such an approach is by itself not robust and faces tuning problems regarding parameters like the number of selected documents, the number of retraining iterations, and the ratio of positively and negatively classified samples used for retraining. The paper develops methods for automatically tuning these parameters, based on predicting the leave-one-out error for a retrained classifier and avoiding that the classifier is diluted by selecting too many or too weak documents for retraining. Our experiments with three different datasets confirm the practical viability of the approach.
    Export
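A minimal sketch of the retraining loop whose parameters are being tuned (synthetic two-class data with scikit-learn; n_iter and n_add are exactly the kind of knobs the report tunes automatically, and the tuning itself is not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(clf, X_lab, y_lab, X_unlab, n_iter=5, n_add=4):
    """Self-training: repeatedly add the most confidently classified
    unlabeled points, with their predicted labels, to the training set."""
    X_lab, y_lab, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for _ in range(n_iter):
        if len(pool) == 0:
            break
        clf.fit(X_lab, y_lab)
        proba = clf.predict_proba(pool)
        take = np.argsort(proba.max(axis=1))[-n_add:]   # most confident points
        X_lab = np.vstack([X_lab, pool[take]])
        y_lab = np.concatenate([y_lab, proba.argmax(axis=1)[take]])
        pool = np.delete(pool, take, axis=0)            # shrink the pool
    return clf.fit(X_lab, y_lab)

rng = np.random.default_rng(0)
X0, X1 = rng.normal(-2, 1, (10, 2)), rng.normal(2, 1, (10, 2))
X_lab = np.vstack([X0[:3], X1[:3]]); y_lab = np.array([0] * 3 + [1] * 3)
X_unlab = np.vstack([X0[3:], X1[3:]])
model = self_train(LogisticRegression(), X_lab, y_lab, X_unlab)
print(model.predict([[2.5, 1.5], [-3.0, 0.0]]))
```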
  • Christian Theobalt, Naveed Ahmed, Edilson de Aguiar, Gernot Ziegler, Hendrik Lensch, Marcus Magnor, and Hans-Peter Seidel, "Joint Motion and Reflectance Capture for Creating Relightable 3D Videos", (2005).  
    Citation
    Christian Theobalt, Naveed Ahmed, Edilson de Aguiar, Gernot Ziegler, Hendrik Lensch, Marcus Magnor, and Hans-Peter Seidel, "Joint Motion and Reflectance Capture for Creating Relightable 3D Videos", (2005).
    Abstract
    Passive optical motion capture is able to provide authentically animated, photo-realistically and view-dependently textured models of real people. To import real-world characters into virtual environments, however, surface reflectance properties must also be known. We describe a video-based modeling approach that captures human motion as well as reflectance characteristics from a handful of synchronized video recordings. The presented method is able to recover spatially varying reflectance properties of clothes by exploiting the time-varying orientation of each surface point with respect to camera and light direction. The resulting model description enables us to match animated subject appearance to different lighting conditions, as well as to interchange surface attributes among different people, e.g. for virtual dressing. Our contribution allows populating virtual worlds with correctly relit, real-world people.
    Export
  • Kirill Dmitriev, Vlastimil Havran, and Hans-Peter Seidel, "Faster ray tracing with SIMD shaft culling", in Max-Planck-Institut für Informatik <Saarbrücken>: Research Report, edited by Max-Planck-Institut für Informatik <Saarbrücken> (2004), pp. 13 p.  
    Citation
    Kirill Dmitriev, Vlastimil Havran, and Hans-Peter Seidel, "Faster ray tracing with SIMD shaft culling", in Max-Planck-Institut für Informatik <Saarbrücken>: Research Report, edited by Max-Planck-Institut für Informatik <Saarbrücken> (2004), pp. 13 p.
    Export
  • Silke Wagner, "Summaries for while programs with recursion", in Max-Planck-Institut für Informatik <Saarbrücken>: Research Report, edited by Max-Planck-Institut für Informatik <Saarbrücken> (2004), pp. 18 p.  
    Citation
    Silke Wagner, "Summaries for while programs with recursion", in Max-Planck-Institut für Informatik <Saarbrücken>: Research Report, edited by Max-Planck-Institut für Informatik <Saarbrücken> (2004), pp. 18 p.
    Export
  • Ioannis Ivrissimtzis, Won-Ki Jeong, Seungyong Lee, Yunjin Lee, and Hans-Peter Seidel, "Neural meshes: surface reconstruction with a learning algorithm", in Research Report, (2004), pp. 16 p.  
    Citation
    Ioannis Ivrissimtzis, Won-Ki Jeong, Seungyong Lee, Yunjin Lee, and Hans-Peter Seidel, "Neural meshes: surface reconstruction with a learning algorithm", in Research Report, (2004), pp. 16 p.
    Export
  • Patrick Maier, "Intuitionistic LTL and a New Characterization of Safety and Liveness", in Max-Planck-Institut für Informatik <Saarbrücken>: Research Report, edited by Max-Planck-Institut für Informatik <Saarbrücken> (2004), pp. 18 p.  
    Citation
    Patrick Maier, "Intuitionistic LTL and a New Characterization of Safety and Liveness", in Max-Planck-Institut für Informatik <Saarbrücken>: Research Report, edited by Max-Planck-Institut für Informatik <Saarbrücken> (2004), pp. 18 p.
    Export
  • Susanne Schmitt and Laurent Fousse, "A comparison of polynomial evaluation schemes", in Max-Planck-Institut für Informatik <Saarbrücken>: Research Report, edited by Becker and Max-Planck-Institut für Informatik <Saarbrücken> (2004), pp. 16 p.  
    Citation
    Susanne Schmitt and Laurent Fousse, "A comparison of polynomial evaluation schemes", in Max-Planck-Institut für Informatik <Saarbrücken>: Research Report, edited by Becker and Max-Planck-Institut für Informatik <Saarbrücken> (2004), pp. 16 p.
    Export
  • Rhaleb Zayer, Christian Rössl, and Hans-Peter Seidel, "r-Adaptive parameterization of surfaces", in Max-Planck-Institut für Informatik <Saarbrücken>: Research Report, edited by Max-Planck-Institut für Informatik <Saarbrücken> (2004), pp. 10 p.  
    Citation
    Rhaleb Zayer, Christian Rössl, and Hans-Peter Seidel, "r-Adaptive parameterization of surfaces", in Max-Planck-Institut für Informatik <Saarbrücken>: Research Report, edited by Max-Planck-Institut für Informatik <Saarbrücken> (2004), pp. 10 p.
    Export
  • Jörg Haber, Carina Schmitt, Martin Koster, and Hans-Peter Seidel, "Modeling hair using a wisp hair model", in Max-Planck-Institut für Informatik <Saarbrücken>: Research Report, edited by Max-Planck-Institut für Informatik <Saarbrücken> (2004), pp. 38 p.  
    Citation
    Jörg Haber, Carina Schmitt, Martin Koster, and Hans-Peter Seidel, "Modeling hair using a wisp hair model", in Max-Planck-Institut für Informatik <Saarbrücken>: Research Report, edited by Max-Planck-Institut für Informatik <Saarbrücken> (2004), pp. 38 p.
    Export
  • Stefan Siersdorfer, Sergej Sizov, and Gerhard Weikum, "Goal-oriented methods and meta methods for document classification and their parameter tuning", in Research Report, (2004), pp. 36 p.  
    Citation
    Stefan Siersdorfer, Sergej Sizov, and Gerhard Weikum, "Goal-oriented methods and meta methods for document classification and their parameter tuning", in Research Report, (2004), pp. 36 p.
    Export
  • Naveen Sivadasan, Peter Sanders, and Martin Skutella, "On scheduling with bounded migration", in Max-Planck-Institut für Informatik <Saarbrücken>: Research Report, edited by Max-Planck-Institut für Informatik <Saarbrücken> (2004), pp. 22 p.  
    Citation
    Naveen Sivadasan, Peter Sanders, and Martin Skutella, "On scheduling with bounded migration", in Max-Planck-Institut für Informatik <Saarbrücken>: Research Report, edited by Max-Planck-Institut für Informatik <Saarbrücken> (2004), pp. 22 p.
    Export
  • Hans de Nivelle and Yevgeny Kazakov, "Resolution decision procedures for the guarded fragment with transitive guards", in Max-Planck-Institut für Informatik <Saarbrücken>: Research Report, edited by Max-Planck-Institut für Informatik <Saarbrücken> (2004), pp. 25 p.  
    Citation
    Hans de Nivelle and Yevgeny Kazakov, "Resolution decision procedures for the guarded fragment with transitive guards", in Max-Planck-Institut für Informatik <Saarbrücken>: Research Report, edited by Max-Planck-Institut für Informatik <Saarbrücken> (2004), pp. 25 p.
    Export
  • Marcus Magnor, "Axisymmetric reconstruction and 3D visualization of bipolar planetary nebulae", in Research Report, (2004), pp. 20 p.  
    Citation
    Marcus Magnor, "Axisymmetric reconstruction and 3D visualization of bipolar planetary nebulae", in Research Report, (2004), pp. 20 p.
    Export
  • Irit Katriel, "On algorithms for online topological ordering and sorting", in Research Report, (2004), pp. 12 p.  
    Citation
    Irit Katriel, "On algorithms for online topological ordering and sorting", in Research Report, (2004), pp. 12 p.
    Export
  • Nicolas Beldiceanu, Irit Katriel, and Sven Thiel, "Filtering algorithms for the Same and UsedBy constraints", in Research Report, (2004), pp. 33 p.  
    Citation
    Nicolas Beldiceanu, Irit Katriel, and Sven Thiel, "Filtering algorithms for the Same and UsedBy constraints", in Research Report, (2004), pp. 33 p.
    Export
  • Peter Sanders and Seth Pettie, "A simpler linear time 2/3-epsilon approximation", in Max-Planck-Institut für Informatik <Saarbrücken>: Research Report, edited by Max-Planck-Institut für Informatik <Saarbrücken> (2004), pp. 7 p.  
    Citation
    Peter Sanders and Seth Pettie, "A simpler linear time 2/3-epsilon approximation", in Max-Planck-Institut für Informatik <Saarbrücken>: Research Report, edited by Max-Planck-Institut für Informatik <Saarbrücken> (2004), pp. 7 p.
    Export
  • Eric Berberich, Arno Eigenwillig, Michael Hemmer, Susan Hert, Lutz Kettner, Kurt Mehlhorn, Joachim Reichel, Susanne Schmitt, Elmar Schömer, Dennis Weber, and Nicola Wolpert, "EXACUS: Efficient and Exact Algorithms for Curves and Surfaces", in ECG Technical Report, edited by Effective Computational Geometry for Curves and Surfaces (2004), pp. 8 p.  
    Citation
    Eric Berberich, Arno Eigenwillig, Michael Hemmer, Susan Hert, Lutz Kettner, Kurt Mehlhorn, Joachim Reichel, Susanne Schmitt, Elmar Schömer, Dennis Weber, and Nicola Wolpert, "EXACUS: Efficient and Exact Algorithms for Curves and Surfaces", in ECG Technical Report, edited by Effective Computational Geometry for Curves and Surfaces (2004), pp. 8 p.
    Export
  • Eric Berberich, Arno Eigenwillig, Ioannis Emiris, Efraim Fogel, Michael Hemmer, Dan Halperin, Athanasios Kakargias, Lutz Kettner, Kurt Mehlhorn, Sylvain Pion, Elmar Schömer, Monique Teillaud, Ron Wein, and Nicola Wolpert, "An empirical comparison of software for constructing arrangements of curved arcs", in ECG Technical Report, edited by Effective Computational Geometry for Curves and Surfaces (2004), pp. 11 p.  
    Citation
    Eric Berberich, Arno Eigenwillig, Ioannis Emiris, Efraim Fogel, Michael Hemmer, Dan Halperin, Athanasios Kakargias, Lutz Kettner, Kurt Mehlhorn, Sylvain Pion, Elmar Schömer, Monique Teillaud, Ron Wein, and Nicola Wolpert, "An empirical comparison of software for constructing arrangements of curved arcs", in ECG Technical Report, edited by Effective Computational Geometry for Curves and Surfaces (2004), pp. 11 p.
    Export
  • Bruno Blanchet, "Automatic Proof of Strong Secrecy for Security Protocols", in Research Report, (2004).  
    Citation
    Bruno Blanchet, "Automatic Proof of Strong Secrecy for Security Protocols", in Research Report, (2004).
    Abstract
    We present a new automatic technique for proving strong secrecy for security protocols. Strong secrecy means that an adversary cannot see any difference when the value of the secret changes. Our technique relies on an automatic translation of the protocol into Horn clauses, and a resolution algorithm on the clauses. Applying this technique to strong secrecy requires important extensions with respect to previous work for the proof of (standard) secrecy and authenticity. This technique can handle a wide range of cryptographic primitives, and yields proofs valid for an unbounded number of sessions and an unbounded message space; it is also flexible and efficient. We have proved its correctness, implemented it, and tested it on several examples of protocols including JFK (a proposed replacement for IKE in IPsec).
    Export
  • L. Sunil Chandran and Naveen Sivadasan, "On the Hadwiger's conjecture for graph products", in Max-Planck-Institut für Informatik <Saarbrücken>: Research Report, edited by Max-Planck-Institut für Informatik <Saarbrücken> (2004), pp. 10 p.  
    Citation
    L. Sunil Chandran and Naveen Sivadasan, "On the Hadwiger's conjecture for graph products", in Max-Planck-Institut für Informatik <Saarbrücken>: Research Report, edited by Max-Planck-Institut für Informatik <Saarbrücken> (2004), pp. 10 p.
    Export
  • L. Sunil Chandran and N. Sivadasan, "On the Hadwiger's Conjecture for Graph Products", in Research Report, (2004).  
    Citation
    L. Sunil Chandran and N. Sivadasan, "On the Hadwiger's Conjecture for Graph Products", in Research Report, (2004).
    Export
  • Stefan Funke, Kurt Mehlhorn, Susanne Schmitt, Christoph Burnikel, Rudolf Fleischer, and Stefan Schirra, "The LEDA class real number - extended version", in ECG Technical Report, edited by Effective Computational Geometry for Curves and Surfaces (2004), pp. 2 p.  
    Citation
    Stefan Funke, Kurt Mehlhorn, Susanne Schmitt, Christoph Burnikel, Rudolf Fleischer, and Stefan Schirra, "The LEDA class real number - extended version", in ECG Technical Report, edited by Effective Computational Geometry for Curves and Surfaces (2004), pp. 2 p.
    Export
  • Michael Hemmer, Lutz Kettner, and Elmar Schömer, "Effects of a modular filter on geometric applications", in ECG Technical Report, edited by Effective Computational Geometry for Curves and Surfaces (2004), pp. 7 p.  
    Citation
    Michael Hemmer, Lutz Kettner, and Elmar Schömer, "Effects of a modular filter on geometric applications", in ECG Technical Report, edited by Effective Computational Geometry for Curves and Surfaces (2004), pp. 7 p.
    Export
  • Lutz Kettner, Kurt Mehlhorn, Sylvain Pion, Stefan Schirra, and Chee Yap, "Classroom examples of robustness problems in geometric computations", in ECG Technical Report, edited by Effective Computational Geometry for Curves and Surfaces (2004), Vol. 3221, pp. 12 p.  
    Citation
    Lutz Kettner, Kurt Mehlhorn, Sylvain Pion, Stefan Schirra, and Chee Yap, "Classroom examples of robustness problems in geometric computations", in ECG Technical Report, edited by Effective Computational Geometry for Curves and Surfaces (2004), Vol. 3221, pp. 12 p.
    Export
  • Christian Klein, "A fast root checking algorithm", in ECG Technical Report, edited by Effective computational geometry for curves and surfaces (2004), pp. 11 p.  
    Citation
    Christian Klein, "A fast root checking algorithm", in ECG Technical Report, edited by Effective computational geometry for curves and surfaces (2004), pp. 11 p.
    Export
  • Werner Krandick and Kurt Mehlhorn, "New bounds for the Descartes method", in Drexel University / Department of Computer Science:Technical Report, edited by Drexel University <Philadelphia, Pa.> / Department of Computer Science (2004), pp. 18 p.  
    Citation
    Werner Krandick and Kurt Mehlhorn, "New bounds for the Descartes method", in Drexel University / Department of Computer Science:Technical Report, edited by Drexel University <Philadelphia, Pa.> / Department of Computer Science (2004), pp. 18 p.
    Export
  • Peter Sanders and Seth Pettie, "A simpler linear time 2/3 - epsilon approximation for maximum weight matching", in Research Report / Max-Planck-Institut für Informatik, (2004), pp. 10 p.  
    Citation
    Peter Sanders and Seth Pettie, "A simpler linear time 2/3 - epsilon approximation for maximum weight matching", in Research Report / Max-Planck-Institut für Informatik, (2004), pp. 10 p.
    Abstract
    We present two $\frac{2}{3} - \epsilon$ approximation algorithms for the maximum weight matching problem that run in time $O(m\log\frac{1}{\epsilon})$. We give a simple and practical randomized algorithm and a somewhat more complicated deterministic algorithm. Both algorithms are exponentially faster in terms of $\epsilon$ than a recent algorithm by Drake and Hougardy. We also show that our algorithms can be generalized to find a $1-\epsilon$ approximation to the maximum weight matching, for any $\epsilon>0$.
    Export
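The deterministic path-growing algorithm of Drake and Hougardy, the natural linear-time 1/2-approximation baseline that the $\frac{2}{3}-\epsilon$ algorithms above improve upon, fits in a few lines (a sketch of that baseline, not of the report's own algorithms):

```python
def path_growing(adj):
    """Drake-Hougardy path growing: grow paths along heaviest remaining
    edges, alternately assigning edges to two matchings; return the
    heavier one. adj: dict u -> dict v -> weight (undirected, no loops)."""
    adj = {u: dict(nb) for u, nb in adj.items()}      # local mutable copy
    m = [[], []]
    for start in list(adj):
        u, i = start, 0
        while adj.get(u):
            v = max(adj[u], key=adj[u].get)           # heaviest edge at u
            m[i].append((u, v, adj[u][v]))
            i ^= 1                                    # alternate matchings
            for x in adj[u]:                          # remove u from graph
                adj[x].pop(u, None)
            adj[u] = {}
            u = v                                     # extend path at v
    return max(m, key=lambda es: sum(w for _, _, w in es))

g = {1: {2: 5, 3: 4}, 2: {1: 5, 3: 6}, 3: {1: 4, 2: 6, 4: 3}, 4: {3: 3}}
print(path_growing(g))  # -> [(1, 2, 5), (3, 4, 3)], total weight 8
```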
  • Susanne Schmitt, "Improved separation bounds for the diamond operator", in ECG Techical Report, edited by Effective Computational Geometry for Curves and Surfaces (2004), pp. 13 p.  
    Citation
    Susanne Schmitt, "Improved separation bounds for the diamond operator", in ECG Techical Report, edited by Effective Computational Geometry for Curves and Surfaces (2004), pp. 13 p.
    Export
  • Susanne Schmitt, "Common subexpression search in LEDA_reals: a study of the diamond-operator", in ECG Technical Report, edited by Effective Computational Geometry for Curves and Surfaces (2004), pp. 5 p.  
    Citation
    Susanne Schmitt, "Common subexpression search in LEDA_reals: a study of the diamond-operator", in ECG Technical Report, edited by Effective Computational Geometry for Curves and Surfaces (2004), pp. 5 p.
    Export
  • Naveen Sivadasan, Peter Sanders, and Martin Skutella, "Online scheduling with bounded migration", in Research Report / Max-Planck-Institut für Informatik, (2004), pp. 21 p.  
    Citation
    Naveen Sivadasan, Peter Sanders, and Martin Skutella, "Online scheduling with bounded migration", in Research Report / Max-Planck-Institut für Informatik, (2004), pp. 21 p.
    Export
  • MPI for Informatics, Max Planck Society (ed), "Sixth Biennial Report: August 2001 - May 2003", in Biennial Report / Max-Planck-Institut für Informatik, (2003), Vol. 6, pp. 526 p.  
    Citation
    MPI for Informatics, Max Planck Society (ed), "Sixth Biennial Report: August 2001 - May 2003", in Biennial Report / Max-Planck-Institut für Informatik, (2003), Vol. 6, pp. 526 p.
    Export
  • Ernst Althaus, Tobias Polzin, and Siavash Daneshmand, "Improving linear programming approaches for the Steiner tree problem", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 19 p.  
    Citation
    Ernst Althaus, Tobias Polzin, and Siavash Daneshmand, "Improving linear programming approaches for the Steiner tree problem", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 19 p.
    Abstract
    We present two theoretically interesting and empirically successful techniques for improving linear programming approaches to the Steiner problem, namely graph transformation and local cuts. We show the impact of these techniques on the solution of the largest benchmark instances ever solved.
    Export
  • René Beier and Berthold Vöcking, "Random knapsack in expected polynomial time", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 22 p.  
    Citation
    René Beier and Berthold Vöcking, "Random knapsack in expected polynomial time", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 22 p.
    Abstract
    In this paper, we present the first average-case analysis proving an expected polynomial running time for an exact algorithm for the 0/1 knapsack problem. In particular, we prove, for various input distributions, that the number of {\em dominating solutions\/} (i.e., Pareto-optimal knapsack fillings) to this problem is polynomially bounded in the number of available items. An algorithm by Nemhauser and Ullmann can enumerate these solutions very efficiently so that a polynomial upper bound on the number of dominating solutions implies an algorithm with expected polynomial running time. The random input model underlying our analysis is very general and not restricted to a particular input distribution. We assume adversarial weights and randomly drawn profits (or vice versa). Our analysis covers general probability distributions with finite mean, and, in its most general form, can even handle different probability distributions for the profits of different items. This feature enables us to study the effects of correlations between profits and weights. Our analysis confirms and explains practical studies showing that so-called strongly correlated instances are harder to solve than weakly correlated ones.
    Export
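The Nemhauser-Ullmann algorithm referred to in the abstract enumerates exactly the dominating solutions, item by item. A minimal sketch (weights and profits as plain pairs):

```python
def nemhauser_ullmann(items):
    """items: list of (weight, profit) pairs. Maintains the list of
    dominating (Pareto-optimal) knapsack fillings, sorted by weight;
    polynomial overall whenever this list stays polynomially small."""
    pareto = [(0, 0)]
    for w, p in items:
        shifted = [(pw + w, pp + p) for pw, pp in pareto]   # take the item
        merged = sorted(pareto + shifted, key=lambda t: (t[0], -t[1]))
        pareto, best = [], -1
        for cw, cp in merged:          # keep only non-dominated pairs
            if cp > best:
                best = cp
                pareto.append((cw, cp))
    return pareto

front = nemhauser_ullmann([(3, 4), (2, 3), (4, 5)])
print(front)                               # the dominating solutions
print(max(p for w, p in front if w <= 5))  # best profit at capacity 5 -> 7
```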
  • Philippe Bekaert, Philipp Slusallek, Ronald Cools, Vlastimil Havran, and Hans-Peter Seidel, "A custom designed density estimation method for light transport", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 28 p.  
    Citation
    Philippe Bekaert, Philipp Slusallek, Ronald Cools, Vlastimil Havran, and Hans-Peter Seidel, "A custom designed density estimation method for light transport", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 28 p.
    Abstract
    We present a new Monte Carlo method for solving the global illumination problem in environments with general geometry descriptions and light emission and scattering properties. Current Monte Carlo global illumination algorithms are based on generic density estimation techniques that do not take into account any knowledge about the nature of the data points --- light and potential particle hit points --- from which a global illumination solution is to be reconstructed. We propose a novel estimator, especially designed for solving linear integral equations such as the rendering equation. The resulting single-pass global illumination algorithm promises to combine the flexibility and robustness of bi-directional path tracing with the efficiency of algorithms such as photon mapping.
    Export
  • Sunil Chandran Leela and C. R. Subramanian, "Girth and treewidth", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 11 p.  
    Citation
    Sunil Chandran Leela and C. R. Subramanian, "Girth and treewidth", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 11 p.
    Export
  • Bela Csaba, "On the Bollob\'as -- Eldridge conjecture for bipartite graphs", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 29 p.  
    Citation
    Bela Csaba, "On the Bollob\'as -- Eldridge conjecture for bipartite graphs", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 29 p.
    Abstract
    Let $G$ be a simple graph on $n$ vertices. A conjecture of Bollob\'as and Eldridge~\cite{be78} asserts that if $\delta (G)\ge {kn-1 \over k+1}$ then $G$ contains any $n$ vertex graph $H$ with $\Delta(H) = k$. We strengthen this conjecture: we prove that if $H$ is bipartite, $3 \le \Delta(H)$ is bounded and $n$ is sufficiently large, then there exists $\beta >0$ such that if $\delta (G)\ge {\Delta \over {\Delta +1}}(1-\beta)n$, then $H \subset G$.
    Export
  • Martin Dietzfelbinger and Hisao Tamaki, "On the probability of rendezvous in graphs", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 30 p.  
    Citation
    Martin Dietzfelbinger and Hisao Tamaki, "On the probability of rendezvous in graphs", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 30 p.
    Abstract
    In a simple graph $G$ without isolated nodes the following random experiment is carried out: each node chooses one of its neighbors uniformly at random. We say a rendezvous occurs if there are adjacent nodes $u$ and $v$ such that $u$ chooses $v$ and $v$ chooses $u$; the probability that this happens is denoted by $s(G)$. M{\'e}tivier \emph{et al.} (2000) asked whether it is true that $s(G)\ge s(K_n)$ for all $n$-node graphs $G$, where $K_n$ is the complete graph on $n$ nodes. We show that this is the case. Moreover, we show that evaluating $s(G)$ for a given graph $G$ is a \#P-complete problem, even if only $d$-regular graphs are considered, for any $d\ge5$.
    Export
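Since evaluating $s(G)$ exactly is \#P-complete by the result above, a Monte Carlo estimate is the natural illustration (a sketch; adjacency given as a dict of neighbor lists):

```python
import random

def estimate_s(adj, trials=200_000, seed=7):
    """Monte Carlo estimate of s(G): every node picks a uniformly random
    neighbor; count trials in which some adjacent pair picked each other."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        pick = {u: rng.choice(nbrs) for u, nbrs in adj.items()}
        if any(pick[pick[u]] == u for u in adj):   # u -> v and v -> u
            hits += 1
    return hits / trials

# K_4: the extremal case, by the result s(G) >= s(K_n) proved above
k4 = {u: [v for v in range(4) if v != u] for u in range(4)}
print(estimate_s(k4))
```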
  • Martin Dietzfelbinger and Philipp Woelfel, "Almost random graphs with simple hash functions", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 23 p.  
    Citation
    Martin Dietzfelbinger and Philipp Woelfel, "Almost random graphs with simple hash functions", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 23 p.
    Abstract
    We describe a simple randomized construction for generating pairs of hash functions h_1,h_2 from a universe U to ranges V=[m]={0,1,...,m-1} and W=[m] so that for every key set S\subseteq U with n=|S| <= m/(1+epsilon) the (random) bipartite (multi)graph with node set V + W and edge set {(h_1(x),h_2(x)) | x in S} exhibits a structure that is essentially random. The construction combines d-wise independent classes for d a relatively small constant with the well-known technique of random offsets. While keeping the space needed to store the description of h_1 and h_2 at O(n^zeta), for zeta<1 fixed arbitrarily, we obtain a much smaller (constant) evaluation time than previous constructions of this kind, which involved Siegel's high-performance hash classes. The main new technique is the combined analysis of the graph structure and the inner structure of the hash functions, as well as a new way of looking at the cycle structure of random (multi)graphs. The construction may be applied to improve on Pagh and Rodler's ``cuckoo hashing'' (2001), to obtain a simpler and faster alternative to a recent construction of Östlin and Pagh (2002/03) for simulating uniform hashing on a key set S, and to the simulation of shared memory on distributed memory machines. We also describe a novel way of implementing (approximate) d-wise independent hashing without using polynomials.
    Export
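The object of study is the bipartite multigraph with one edge $(h_1(x), h_2(x))$ per key. The sketch below substitutes plain salted hashing for the paper's d-wise independent classes with random offsets, and counts the edges beyond a spanning forest; for an essentially random graph with n <= m/(1+epsilon) keys this excess stays small, which is what makes, e.g., cuckoo hashing succeed.

```python
import random

def build_graph(keys, m, seed=42):
    """One edge (h1(x) in V, h2(x) in W) per key x. Salted built-in hashing
    stands in for the report's construction (deterministic within a run)."""
    rng = random.Random(seed)
    s1, s2 = rng.getrandbits(64), rng.getrandbits(64)
    return [((hash((s1, x)) % m, 'V'), (hash((s2, x)) % m, 'W')) for x in keys]

def cycle_excess(edges):
    """Union-find count of edges that close a cycle in their component."""
    parent = {}
    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    excess = 0
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            excess += 1                     # edge closes a cycle
        else:
            parent[ru] = rv
    return excess

edges = build_graph([f"key{i}" for i in range(600)], m=1024)
print(cycle_excess(edges))   # expected to be small
```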
  • Friedrich Eisenbrand, "Fast Integer Programming in Fixed Dimension", in Research Report, (2003), pp. 14 p.  
    Citation
    Friedrich Eisenbrand, "Fast Integer Programming in Fixed Dimension", in Research Report, (2003), pp. 14 p.
    Abstract
    We show that the optimum of an integer program in fixed dimension with a fixed number of constraints can be computed with $O(s)$ basic arithmetic operations, where $s$ is the binary encoding length of the input. This improves on the quadratic running time of previous algorithms, which are based on Lenstra's algorithm and binary search. It follows that an integer program in fixed dimension, which is defined by $m$ constraints, each of binary encoding length at most $s$, can be solved with an expected number of $O(m + \log m \, s)$ arithmetic operations using Clarkson's random sampling algorithm.
    Export
  • Efi Fogel, Dan Halperin, Ron Wein, Monique Teillaud, Eric Berberich, Arno Eigenwillig, Susan Hert, and Lutz Kettner, "Specification of the Traits Classes for CGAL Arrangements of Curves", in Technical Report, (2003).  
    Citation
    Efi Fogel, Dan Halperin, Ron Wein, Monique Teillaud, Eric Berberich, Arno Eigenwillig, Susan Hert, and Lutz Kettner, "Specification of the Traits Classes for CGAL Arrangements of Curves", in Technical Report, (2003).
    Export
  • Thomas Hangelbroek, Günther Nürnberger, Christian Rössl, Hans-Peter Seidel, and Frank Zeilfelder, "The dimension of $C^1$ splines of arbitrary degree on a tetrahedral partition", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 39 p.  
    Citation
    Thomas Hangelbroek, Günther Nürnberger, Christian Rössl, Hans-Peter Seidel, and Frank Zeilfelder, "The dimension of $C^1$ splines of arbitrary degree on a tetrahedral partition", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 39 p.
    Abstract
    We consider the linear space of piecewise polynomials in three variables which are globally smooth, i.e., trivariate $C^1$ splines. The splines are defined on a uniform tetrahedral partition $\Delta$, which is a natural generalization of the four-directional mesh. By using Bernstein-Bézier techniques, we establish formulae for the dimension of the $C^1$ splines of arbitrary degree.
    Export
  • Manfred Jaeger, "A representation theorem and applications to measure selection and noninformative priors", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 35 p.  
    Citation
    Manfred Jaeger, "A representation theorem and applications to measure selection and noninformative priors", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 35 p.
    Abstract
    We introduce a set of transformations on the set of all probability distributions over a finite state space, and show that these transformations are the only ones that preserve certain elementary probabilistic relationships. This result provides a new perspective on a variety of probabilistic inference problems in which invariance considerations play a role. Two particular applications we consider in this paper are the development of an equivariance-based approach to the problem of measure selection, and a new justification for Haldane's prior as the distribution that encodes prior ignorance about the parameter of a multinomial distribution.
    Export
  • Irit Katriel and Sven Thiel, "Fast bound consistency for the global cardinality constraint", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 30 p.  
    Citation
    Irit Katriel and Sven Thiel, "Fast bound consistency for the global cardinality constraint", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 30 p.
    Abstract
    We show an algorithm for bound consistency of {\em global cardinality constraints}, which runs in time $O(n+n')$ plus the time required to sort the assignment variables by range endpoints, where $n$ is the number of assignment variables and $n'$ is the number of values in the union of their ranges. We thus offer a fast alternative to R\'egin's arc consistency algorithm~\cite{Regin} which runs in time $O(n^{3/2}n')$ and space $O(n\cdot n')$. Our algorithm also achieves bound consistency for the number of occurrences of each value, which has not been done before.
    Export
  • Yevgeny Kazakov and Hans de Nivelle, "Subsumption of concepts in $DL$ ${\cal FL}_0$ for (cyclic) terminologies with respect to descriptive semantics is PSPACE-complete", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 12 p.  
    Citation
    Yevgeny Kazakov and Hans de Nivelle, "Subsumption of concepts in $DL$ ${\cal FL}_0$ for (cyclic) terminologies with respect to descriptive semantics is PSPACE-complete", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 12 p.
    Abstract
    We prove the PSPACE-completeness of the subsumption problem for (cyclic) terminologies with respect to descriptive semantics in the simple Description Logic ${\cal FL}_0$, which allows for conjunctions and universal value restrictions only, thus solving a problem that had been open for more than ten years.
    Export
  • Annamaria Kovács, "Sum-Multicoloring on paths", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 20 p.  
    Citation
    Annamaria Kovács, "Sum-Multicoloring on paths", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 20 p.
    Abstract
    The question of whether the preemptive Sum Multicoloring (pSMC) problem is hard on paths was raised by Halldorsson et al. ["Multi-coloring trees", Information and Computation, 180(2):113-129, 2002]. The pSMC problem is a scheduling problem where the pairwise conflicting jobs are represented by a conflict graph, and the time lengths of jobs by integer weights on the nodes. The goal is to schedule the jobs so that the sum of their finishing times is minimized. In this paper we give an $O(n^3 p)$ time algorithm for the pSMC problem on paths, where $n$ is the number of nodes and $p$ is the largest time length. The result easily carries over to cycles.
    Export
  • Piotr Krysta, Peter Sanders, and Berthold Vöcking, "Scheduling and traffic allocation for tasks with bounded splittability", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 15 p.  
    Citation
    Piotr Krysta, Peter Sanders, and Berthold Vöcking, "Scheduling and traffic allocation for tasks with bounded splittability", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 15 p.
    Abstract
    We investigate variants of the well-studied problem of scheduling tasks on uniformly related machines to minimize the makespan. In the $k$-splittable scheduling problem each task can be broken into at most $k \ge 2$ pieces, each of which has to be assigned to a different machine. In the slightly more general SAC problem each task $j$ comes with its own splittability parameter $k_j$, where we assume $k_j \ge 2$. These problems are known to be NP-hard and, hence, previous research mainly focuses on approximation algorithms. Our motivation to study these scheduling problems is traffic allocation for server farms based on a variant of the Internet Domain Name Service (DNS) that uses a stochastic splitting of request streams. Optimal solutions for the $k$-splittable scheduling problem yield optimal solutions for this traffic allocation problem. Approximation ratios, however, do not translate from one problem to the other because of non-linear latency functions. In fact, we can prove that the traffic allocation problem with standard latency functions from Queueing Theory cannot be approximated in polynomial time within any finite factor because of the extreme behavior of these functions. Because of this inapproximability, we turn our attention to fixed-parameter tractable algorithms. Our main result is a polynomial time algorithm computing an exact solution for the $k$-splittable scheduling problem as well as the SAC problem for any fixed number of machines. The running time of our algorithm increases exponentially with the number of machines but is only linear in the number of tasks. This result is the first proof that bounded splittability reduces the complexity of scheduling, since unsplittable scheduling is known to be NP-hard already for two machines. Furthermore, since our algorithm solves the scheduling problem exactly, it also solves the traffic allocation problem that motivated our study.
    Export
  • Piotr Krysta, Artur Czumaj, and Berthold Vöcking, "Selfish traffic allocation for server farms", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 43 p.  
    Citation
    Piotr Krysta, Artur Czumaj, and Berthold Vöcking, "Selfish traffic allocation for server farms", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 43 p.
    Abstract
    We study the price of selfish routing in non-cooperative networks like the Internet. In particular, we investigate the price of selfish routing using the coordination ratio and other (e.g., bicriteria) measures in the recently introduced game theoretic network model of Koutsoupias and Papadimitriou. We generalize this model towards general, monotone families of cost functions and cost functions from queueing theory. A summary of our main results for general, monotone cost functions is as follows.
    Export
  • Patrick Maier, "Compositional circular assume-guarantee rules cannot be sound and complete", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 22 p.  
    Citation
    Patrick Maier, "Compositional circular assume-guarantee rules cannot be sound and complete", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 22 p.
    Abstract
    Circular assume-guarantee reasoning is used for the compositional verification of concurrent systems. Its soundness has been studied in depth, perhaps because circularity makes it anything but obvious. In this paper, we investigate completeness. We show that compositional circular assume-guarantee rules cannot be both sound and complete.
    Export
  • Andreas Podelski and Andrey Rybalchenko, "Software model checking of liveness properties via transition invariants", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 29 p.  
    Citation
    Andreas Podelski and Andrey Rybalchenko, "Software model checking of liveness properties via transition invariants", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 29 p.
    Abstract
    Model checking is an automated method to prove safety and liveness properties for finite systems. Software model checking uses predicate abstraction to compute invariants and thus prove safety properties for infinite-state programs. We address the limitation of current software model checking methods to safety properties. Our results are a characterization of the validity of a liveness property by the existence of transition invariants, and a method that uses transition predicate abstraction to compute transition invariants and thus prove liveness properties for infinite-state programs.
    Export
  • Christian Rössl, Frank Zeilfelder, Günther Nürnberger, and Hans-Peter Seidel, "Visualization of volume data with quadratic super splines", in Research Report, (2003), pp. 15 p.  
    Citation
    Christian Rössl, Frank Zeilfelder, Günther Nürnberger, and Hans-Peter Seidel, "Visualization of volume data with quadratic super splines", in Research Report, (2003), pp. 15 p.
    Abstract
    We develop a new approach to reconstruct non-discrete models from gridded volume samples. As a model, we use quadratic, trivariate super splines on a uniform tetrahedral partition $\Delta$. The approximating splines are determined in a natural and completely symmetric way by averaging local data samples such that appropriate smoothness conditions are automatically satisfied. On each tetrahedron of $\Delta$, the spline is a polynomial of total degree two which provides several advantages including the efficient computation, evaluation and visualization of the model. We apply Bernstein-Bézier techniques well-known in Computer Aided Geometric Design to compute and evaluate the trivariate spline and its gradient. With this approach the volume data can be visualized efficiently, e.g. with isosurface ray-casting. Along an arbitrary ray the splines are univariate, piecewise quadratics and thus the exact intersection for a prescribed isovalue can be easily determined in an analytic and exact way. Our results confirm the efficiency of the method and demonstrate a high visual quality for rendered isosurfaces.
    Export
  • Peter Sanders, "Polynomial time algorithms for network information flow", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 18 p.  
    Citation
    Peter Sanders, "Polynomial time algorithms for network information flow", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 18 p.
    Abstract
    The famous max-flow min-cut theorem states that a source node $s$ can send information through a network $(V,E)$ to a sink node $t$ at a rate determined by the min-cut separating $s$ and $t$. Recently it has been shown that this rate can also be achieved for multicasting to several sinks provided that the intermediate nodes are allowed to reencode the information they receive. We give polynomial time algorithms for solving this problem. We additionally underline the potential benefit of coding by showing that multicasting without coding sometimes only allows a rate that is a factor $\Omega(\log |V|)$ smaller.
    Export
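The achievable multicast rate with coding equals the smallest source-to-sink min-cut over all sinks, so the bound itself is a one-liner with any max-flow routine (a sketch using networkx on a small example network; the report's actual contribution, constructing the codes in polynomial time, is not shown):

```python
import networkx as nx

# small directed network, unit capacities
g = nx.DiGraph()
for u, v in [('s', 'a'), ('s', 'b'), ('a', 't1'), ('a', 'b'),
             ('b', 't2'), ('b', 't1'), ('a', 't2')]:
    g.add_edge(u, v, capacity=1)

# multicast rate achievable with coding: min over sinks of the s-t min-cut
rate = min(nx.maximum_flow_value(g, 's', t) for t in ['t1', 't2'])
print(rate)  # -> 2
```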
  • Peter Sanders and Roman Dementiev, "Asynchronous parallel disk sorting", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 22 p.  
    Citation
    Peter Sanders and Roman Dementiev, "Asynchronous parallel disk sorting", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 22 p.
    Abstract
    We develop an algorithm for parallel disk sorting whose I/O cost approaches the lower bound and that guarantees almost perfect overlap between I/O and computation. Previous algorithms have either suboptimal I/O volume or cannot guarantee that I/O and computation can always be overlapped. We give an efficient implementation that can (at least) compete with the best practical implementations but gives additional performance guarantees. For the experiments we have configured a state-of-the-art machine that can sustain full-bandwidth I/O with eight disks and is very cost effective.
    Export
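    The overlap guarantee itself is the subtle part of the report; the basic mechanism, prefetching into a bounded pool of buffers so that reading and computing proceed concurrently, can be sketched as follows (Python; illustrative only, not the paper's optimal prefetching and buffer-allocation strategy):

    import threading, queue

    def overlapped_pipeline(read_block, process_block, num_blocks, depth=2):
        """Overlap 'I/O' and computation: a reader thread stays at most
        `depth` blocks ahead, mimicking a fixed pool of prefetch buffers."""
        buffers = queue.Queue(maxsize=depth)
        def reader():
            for i in range(num_blocks):
                buffers.put(read_block(i))   # blocks when all buffers are full
            buffers.put(None)                # end-of-stream marker
        threading.Thread(target=reader, daemon=True).start()
        results = []
        while (block := buffers.get()) is not None:
            results.append(process_block(block))
        return results

    # Toy usage: "reading" returns a list, "processing" sums it.
    print(overlapped_pipeline(lambda i: [i] * 4, sum, num_blocks=5))  # [0, 4, 8, 12, 16]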
  • Guido Schäfer and Naveen Sivadasan, "Topology matters: smoothed competitive analysis of metrical task systems", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 28 p.  
    Citation
    Guido Schäfer and Naveen Sivadasan, "Topology matters: smoothed competitive analysis of metrical task systems", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 28 p.
    Abstract
    We consider online problems that can be modeled as \emph{metrical task systems}: An online algorithm resides in a graph $G$ of $n$ nodes and may move in this graph at a cost equal to the distance. The algorithm has to service a sequence of \emph{tasks} that arrive online; each task specifies for each node a \emph{request cost} that is incurred if the algorithm services the task in this particular node. The objective is to minimize the total request cost plus the total travel cost. Several important online problems can be modeled as metrical task systems. Borodin, Linial and Saks \cite{BLS92} presented a deterministic \emph{work function algorithm} (WFA) for metrical task systems having a tight competitive ratio of $2n-1$. However, the competitive ratio often is an over-pessimistic estimation of the true performance of an online algorithm. In this paper, we present a \emph{smoothed competitive analysis} of WFA. Given an adversarial task sequence, we smoothen the request costs by means of a symmetric additive smoothing model and analyze the competitive ratio of WFA on the smoothed task sequence. Our analysis reveals that the smoothed competitive ratio of WFA is much better than $O(n)$ and that it depends on several topological parameters of the underlying graph $G$, such as the minimum edge length $U_{\min}$, the maximum degree $D$, and the edge diameter $diam$. Assuming that the ratio between the maximum and the minimum edge length of $G$ is bounded by a constant, the smoothed competitive ratio of WFA becomes $O(diam (U_{\min}/\sigma + \log(D)))$ and $O(\sqrt{n \cdot (U_{\min}/\sigma + \log(D))})$, where $\sigma$ denotes the standard deviation of the smoothing distribution. For example, already for perturbations with $\sigma = \Theta(U_{\min})$ the competitive ratio reduces to $O(\log n)$ on a clique and to $O(\sqrt{n})$ on a line. We also prove that for a large class of graphs these bounds are asymptotically tight. Furthermore, we provide two lower bounds for any arbitrary graph. We obtain a better bound of $O(\beta \cdot (U_{\min}/\sigma + \log(D)))$ on the smoothed competitive ratio of WFA if each adversarial task contains at most $\beta$ non-zero entries. Our analysis holds for various probability distributions, including the uniform and the normal distribution. We also provide the first average case analysis of WFA. We prove that WFA has $O(\log(D))$ expected competitive ratio if the request costs are chosen randomly from an arbitrary non-increasing distribution with standard deviation $\sigma$. (A sketch of WFA follows this entry.)
    Export
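    For reference, the work function algorithm analyzed above admits a very small implementation; the smoothed analysis concerns its competitive ratio, not the algorithm itself (Python; tie-breaking toward the smaller work function value is one admissible choice, and all names are illustrative):

    def wfa(dist, costs, start=0):
        """Work Function Algorithm for a metrical task system.
        dist[u][v]: metric; costs: list of request-cost vectors r_t.
        Returns the total (service + travel) cost incurred by WFA."""
        n = len(dist)
        w = [dist[start][s] for s in range(n)]      # initial work function
        pos, total = start, 0
        for r in costs:
            # dynamic-programming update: w_t(s) = min_u w_{t-1}(u) + r(u) + d(u,s)
            w = [min(w[u] + r[u] + dist[u][s] for u in range(n)) for s in range(n)]
            # WFA rule: move to the state minimizing w_t(s) + d(pos, s)
            nxt = min(range(n), key=lambda s: (w[s] + dist[pos][s], w[s]))
            total += dist[pos][nxt] + r[nxt]
            pos = nxt
        return total

    # Two states at distance 1; the second and third tasks pull WFA to state 1.
    d = [[0, 1], [1, 0]]
    print(wfa(d, [[0, 100], [100, 0], [100, 0]]))   # 1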
  • Guido Schäfer and Stefano Leonardi, "Cross-monotonic cost sharing methods for connected facility location games", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 10 p.  
    Citation
    Guido Schäfer and Stefano Leonardi, "Cross-monotonic cost sharing methods for connected facility location games", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 10 p.
    Abstract
    We present cost sharing methods for connected facility location games that are cross-monotonic, competitive, and recover a constant fraction of the cost of the constructed solution. The novelty of this paper is that we use randomized algorithms, and that we share the expected cost among the participating users. As a consequence, our cost sharing methods are simple, and achieve attractive approximation ratios for various NP-hard problems. We also provide a primal-dual cost sharing method for the connected facility location game with opening costs. (The defining properties are restated after this entry.)
    Export
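    The three properties claimed above, in the standard notation for a cost sharing method $\xi(i,S)$ (a hedged restatement: $S \subseteq T$ are user sets, $i \in S$, $\mathrm{OPT}(S)$ the optimum cost, $c(S)$ the cost of the constructed solution, and $\beta$ the constant recovered fraction):
    \[
      \text{cross-monotonicity: } \xi(i,S) \;\ge\; \xi(i,T); \qquad
      \text{competitiveness: } \sum_{i \in S} \xi(i,S) \;\le\; \mathrm{OPT}(S); \qquad
      \text{cost recovery: } \sum_{i \in S} \xi(i,S) \;\ge\; \beta \, c(S).
    \]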
  • Guido Schäfer, "A note on the smoothed complexity of the single-source shortest path problem", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 8 p.  
    Citation
    Guido Schäfer, "A note on the smoothed complexity of the single-source shortest path problem", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 8 p.
    Abstract
    Banderier, Beier and Mehlhorn showed that the single-source shortest path problem has smoothed complexity $O(m+n(K-k))$ if the edge costs are $K$-bit integers and the last $k$ least significant bits are perturbed randomly. Their analysis holds if each bit is set to $0$ or $1$ with probability $\frac{1}{2}$. We extend their result and show that the same analysis goes through for a large class of probability distributions: We prove a smoothed complexity of $O(m+n(K-k))$ if the last $k$ bits of each edge cost are replaced by some random number chosen from $[0, \dots, 2^k-1]$ according to some \emph{arbitrary} probability distribution whose expectation is not too close to zero. We do not require that the edge costs are perturbed independently. The same time bound holds even if the random perturbations are heterogeneous. If $k=K$ our analysis implies a linear average case running time for various probability distributions. We also show that the running time is $O(m+n(K-k))$ with high probability if the random replacements are chosen independently. (A sketch of the perturbation model follows this entry.)
    Export
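    A minimal sketch of the smoothing model: the $k$ least significant bits of each $K$-bit edge cost are replaced by a random value (here uniform, one member of the class of distributions covered), after which any shortest-path code runs unchanged (Python; names are illustrative):

    import heapq, random

    def perturb_low_bits(cost, k):
        """Replace the k least significant bits of an integer edge cost
        by a uniform random value from [0, 2^k - 1]."""
        return (cost >> k << k) | random.randrange(1 << k)

    def dijkstra(adj, s):
        dist = {s: 0}
        pq = [(0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float('inf')):
                continue                       # stale queue entry
            for v, c in adj[u]:
                if d + c < dist.get(v, float('inf')):
                    dist[v] = d + c
                    heapq.heappush(pq, (d + c, v))
        return dist

    # Smooth a (worst-case) instance, then solve it.
    edges = {0: [(1, 0b1100), (2, 0b1000)], 1: [(2, 0b0100)], 2: []}
    smoothed = {u: [(v, perturb_low_bits(c, k=2)) for v, c in vs]
                for u, vs in edges.items()}
    print(dijkstra(smoothed, 0))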
  • Guido Schäfer, Luca Becchetti, Stefano Leonardi, Alberto Marchetti-Spaccamela, and Tjark Vredeveld, "Average case and smoothed competitive analysis of the multi-level feedback algorithm", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 31 p.  
    Citation
    Guido Schäfer, Luca Becchetti, Stefano Leonardi, Alberto Marchetti-Spaccamela, and Tjark Vredeveld, "Average case and smoothed competitive analysis of the multi-level feedback algorithm", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 31 p.
    Abstract
    In this paper we introduce the notion of smoothed competitive analysis of online algorithms. Smoothed analysis has been proposed by Spielman and Teng [\emph{Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time}, STOC, 2001] to explain the behaviour of algorithms that work well in practice while performing very poorly from a worst case analysis point of view. We apply this notion to analyze the Multi-Level Feedback (MLF) algorithm to minimize the total flow time on a sequence of jobs released over time when the processing time of a job is only known at time of completion. The initial processing times are integers in the range $[1,2^K]$. We use a partial bit randomization model, where the initial processing times are smoothened by changing the $k$ least significant bits under a quite general class of probability distributions. We show that MLF admits a smoothed competitive ratio of $O((2^k/\sigma)^3 + (2^k/\sigma)^2 2^{K-k})$, where $\sigma$ denotes the standard deviation of the distribution. In particular, we obtain a competitive ratio of $O(2^{K-k})$ if $\sigma = \Theta(2^k)$. We also prove an $\Omega(2^{K-k})$ lower bound for any deterministic algorithm that is run on processing times smoothened according to the partial bit randomization model. For various other smoothening models, including the additive symmetric smoothening model used by Spielman and Teng, we give a higher lower bound of $\Omega(2^K)$. A direct consequence of our result is also the first average case analysis of MLF. We show a constant expected ratio of the total flow time of MLF to the optimum under several distributions including the uniform distribution. (A sketch of MLF follows this entry.)
    Export
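    A compact simulation of the MLF policy analyzed above, with quantum $2^i$ at level $i$ and demotion on quantum expiry (Python; a unit-time simulation for illustration, not an efficient scheduler, and all names are illustrative):

    from collections import deque

    def mlf_total_flow_time(jobs, levels=20):
        """MLF: a job at level i receives a quantum of 2^i units, then is
        demoted; the lowest non-empty level always runs first.
        jobs: list of (release_time, processing_time)."""
        queues = [deque() for _ in range(levels)]
        pending = sorted(range(len(jobs)), key=lambda j: jobs[j][0])
        remaining = [p for _, p in jobs]
        quantum_left = [1] * len(jobs)          # 2^0 at level 0
        done, t, i, flow = 0, 0, 0, 0
        while done < len(jobs):
            while i < len(pending) and jobs[pending[i]][0] <= t:
                queues[0].append(pending[i]); i += 1   # new arrivals enter level 0
            level = next((l for l in range(levels) if queues[l]), None)
            if level is None:
                t += 1; continue                # idle until the next release
            j = queues[level][0]
            remaining[j] -= 1; quantum_left[j] -= 1; t += 1
            if remaining[j] == 0:
                queues[level].popleft(); done += 1; flow += t - jobs[j][0]
            elif quantum_left[j] == 0:          # quantum exhausted: demote
                queues[level].popleft()
                quantum_left[j] = 2 ** (level + 1)
                queues[level + 1].append(j)
        return flow

    print(mlf_total_flow_time([(0, 3), (1, 1), (2, 6)]))   # 14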
  • Susanne Schmitt, "The Diamond Operator for Real Algebraic Numbers", (Effective Computational Geometry for Curves and Surfaces, Sophia Antipolis, FRANCE, 2003).  
    Citation
    Susanne Schmitt, "The Diamond Operator for Real Algebraic Numbers", (Effective Computational Geometry for Curves and Surfaces, Sophia Antipolis, FRANCE, 2003).
    Abstract
    Real algebraic numbers are real roots of polynomials with integral coefficients. They can be represented as expressions whose leaves are integers and whose internal nodes are additions, subtractions, multiplications, divisions, k-th root operations for integral k, or taking roots of polynomials whose coefficients are given by the value of subexpressions. This last operator is called the diamond operator. I explain the implementation of the diamond operator in a LEDA extension package. (A numeric sketch follows this entry.)
    Export
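    A numeric stand-in conveys the interface of the diamond operator: select the $j$-th smallest real root of a polynomial whose coefficients are values of subexpressions. The actual LEDA extension computes with exact real algebraic numbers, which floating point only approximates (Python with NumPy; names are illustrative):

    import numpy as np

    def diamond(coeffs, j):
        """j-th smallest real root of the polynomial with the given
        coefficients (highest degree first). Floating-point stand-in
        for the exact operator described in the entry above."""
        roots = np.roots(coeffs)
        real = sorted(r.real for r in roots if abs(r.imag) < 1e-9)
        return real[j - 1]

    # sqrt(2) as a diamond expression: the 2nd real root of x^2 - 2.
    x = diamond([1, 0, -2], 2)
    # Nested use: the real root of y^3 - x, i.e. the cube root of sqrt(2).
    print(diamond([1, 0, 0, -x], 1))   # ~1.1225 (= 2^(1/6))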
  • Hisao Tamaki, "Alternating cycles contribution: a strategy of tour-merging for the traveling salesman problem", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 22 p.  
    Citation
    Hisao Tamaki, "Alternating cycles contribution: a strategy of tour-merging for the traveling salesman problem", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 22 p.
    Abstract
    A strategy of merging several traveling salesman tours into a better tour, called ACC (Alternating Cycles Contribution), is introduced. Two algorithms embodying this strategy for geometric instances are implemented and used to enhance Helsgaun's implementation of his variant of the Lin-Kernighan heuristic. Experiments on the large instances in TSPLIB show that a significant improvement of performance is obtained. These algorithms were used in September 2002 to find a new best tour for the largest instance pla85900 in TSPLIB.
    Export
  • Hisao Tamaki, "A linear time heuristic for the branch-decomposition of planar graphs", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 18 p.  
    Citation
    Hisao Tamaki, "A linear time heuristic for the branch-decomposition of planar graphs", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 18 p.
    Abstract
    Let $G$ be a biconnected planar graph given together with its planar drawing. A {\em face-vertex walk} in $G$ of length $k$ is an alternating sequence $x_0, \ldots, x_k$ of vertices and faces (i.e., if $x_{i - 1}$ is a face then $x_i$ is a vertex and vice versa) such that $x_{i - 1}$ and $x_i$ are incident with each other for $1 \leq i \leq k$. For each vertex or face $x$ of $G$, let $\alpha_x$ denote the length of the shortest face-vertex walk from the outer face of $G$ to $x$. Let $\alpha_G$ denote the maximum of $\alpha_x$ over all vertices/faces $x$. We show that there always exists a branch-decomposition of $G$ with width $\alpha_G$ and that such a decomposition can be constructed in linear time. We also give experimental results, in which we compare the width of our decomposition with the optimal width and with the width obtained by some heuristics for general graphs proposed by previous researchers, on test instances used by those researchers. On 56 out of the total 59 test instances, our method gives a decomposition within additive 2 of the optimum width and on 33 instances it achieves the optimum width. (A sketch of computing $\alpha_G$ follows this entry.)
    Export
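    The parameter $\alpha_G$ is a breadth-first distance in the bipartite vertex-face incidence graph, hence computable in linear time, as the following sketch illustrates (Python; the face list is assumed to encode the planar drawing, and all names are illustrative):

    from collections import deque

    def alpha_G(face_vertices, outer_face):
        """alpha_G = max over all vertices/faces x of the shortest
        face-vertex walk length from the outer face to x, via BFS on the
        vertex-face incidence graph. face_vertices: face id -> vertex list."""
        adj = {}
        for f, vs in face_vertices.items():
            adj.setdefault(('f', f), set()).update(('v', v) for v in vs)
            for v in vs:
                adj.setdefault(('v', v), set()).add(('f', f))
        dist = {('f', outer_face): 0}
        q = deque([('f', outer_face)])
        while q:
            x = q.popleft()
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    q.append(y)
        return max(dist.values())

    # Planar drawing of K4: outer face {1,2,3} plus three inner triangles.
    faces = {0: [1, 2, 3], 1: [1, 2, 4], 2: [2, 3, 4], 3: [1, 3, 4]}
    print(alpha_G(faces, outer_face=0))   # 3: outer face -> v1 -> face(1,2,4) -> v4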
  • Marco Tarini, Hendrik P. A. Lensch, Michael Gösele, and Hans-Peter Seidel, "3D acquisition of mirroring objects", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 37 p.  
    Citation
    Marco Tarini, Hendrik P. A. Lensch, Michael Gösele, and Hans-Peter Seidel, "3D acquisition of mirroring objects", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 37 p.
    Abstract
    Objects with mirroring optical characteristics are left out of the scope of most 3D scanning methods. We present here a new automatic acquisition approach, shape-from-distortion, that focuses on that category of objects, requires only a still camera and a color monitor, and produces range scans (plus a normal and a reflectance map) of the target. Our technique consists of two steps: first, an improved environment matte is captured for the mirroring object, using the interference of patterns with different frequencies to obtain sub-pixel accuracy. Then, the matte is converted into a normal and a depth map by exploiting the self-coherence of a surface when integrating the normal map along different paths. The results show very high accuracy, capturing even the smallest surface details. The acquired depth maps can be further processed using standard techniques to produce a complete 3D mesh of the object.
    Export
  • Christian Theobalt, Ming Li, Marcus A. Magnor, and Hans-Peter Seidel, "A flexible and versatile studio for synchronized multi-view video recording", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 18 p.  
    Citation
    Christian Theobalt, Ming Li, Marcus A. Magnor, and Hans-Peter Seidel, "A flexible and versatile studio for synchronized multi-view video recording", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 18 p.
    Abstract
    In recent years, the convergence of Computer Vision and Computer Graphics has put forth new research areas that work on scene reconstruction from and analysis of multi-view video footage. In free-viewpoint video, for example, new views of a scene are generated from an arbitrary viewpoint in real-time from a set of real multi-view input video streams. The analysis of real-world scenes from multi-view video to extract motion information or reflection models is another field of research that greatly benefits from high-quality input data. Building a recording setup for multi-view video involves a great effort on the hardware as well as the software side. The amount of image data to be processed is huge, a decent lighting and camera setup is essential for a naturalistic scene appearance and robust background subtraction, and the computing infrastructure has to enable real-time processing of the recorded material. This paper describes a recording setup for multi-view video acquisition that enables the synchronized recording of dynamic scenes from multiple camera positions under controlled conditions. The requirements on the room and their implementation in the separate components of the studio are described in detail. The efficiency and flexibility of the studio are demonstrated on the basis of the results that we obtain with a real-time 3D scene reconstruction system, a system for non-intrusive optical motion capture and a model-based free-viewpoint video system for human actors.
    Export
  • Nordin Zakaria, "FaceSketch: an interface for sketching and coloring cartoon faces", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 30 p.  
    Citation
    Nordin Zakaria, "FaceSketch: an interface for sketching and coloring cartoon faces", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 30 p.
    Abstract
    We discuss FaceSketch, an interface for 2D facial human-like cartoon sketching. The basic paradigm in FaceSketch is to offer a 2D interaction style and feel while employing 3D techniques to facilitate various tasks involved in drawing and redrawing faces from different views. The system works by accepting freeform strokes denoting head, eyes, nose, and other facial features, constructing an internal 3D model that conforms to the input silhouettes, and redisplaying the result in simple sketchy style from any user-specified viewing direction. In a manner similar to the conventional 2D drawing process, the displayed shape may be changed by oversketching silhouettes, and hatches and strokes may be added within its boundary. Implementation-wise, we demonstrate the feasibility of using simple point primitives as the fundamental 3D modeling primitive in a sketch-based system. We present relatively simple but robust and efficient point-based algorithms for shape inflation, modification and display in 3D view, and illustrate the feasibility of our ideas with a number of example interactions and facial sketches.
    Export
  • Rhaleb Zayer, Christian Rössl, and Hans-Peter Seidel, "Convex boundary angle based flattening", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 16 p.  
    Citation
    Rhaleb Zayer, Christian Rössl, and Hans-Peter Seidel, "Convex boundary angle based flattening", in Research Report / Max-Planck-Institut für Informatik, (2003), pp. 16 p.
    Abstract
    Angle Based Flattening is a robust parameterization method that finds a quasi-conformal mapping by solving a non-linear optimization problem. We take advantage of a characterization of convex planar drawings of triconnected graphs to introduce new boundary constraints. This prevents boundary intersections and avoids post-processing of the parameterized mesh. We present a simple transformation to effectively relax the constrained minimization problem, which improves the convergence of the optimization method. As a natural extension, we discuss the construction of Delaunay flat meshes. This may further enhance the quality of the resulting parameterization. (The underlying optimization problem is restated after this entry.)
    Export
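    For context, the non-linear problem that ABF solves is roughly the following (a hedged restatement after Sheffer and de Sturler: planar angles $\alpha_i$ stay close to the measured 3D angles $\beta_i$ subject to validity constraints per triangle $t$ and interior vertex $v$, where $\alpha_i^{+}$/$\alpha_i^{-}$ denote the angles following/preceding each edge around the vertex wheel; the report's contribution is additional constraints keeping the boundary convex):
    \[
      \min_{\alpha} \sum_i w_i (\alpha_i - \beta_i)^2
      \quad \text{s.t.} \quad
      \sum_{i \in t} \alpha_i = \pi, \qquad
      \sum_{i \in N(v)} \alpha_i = 2\pi, \qquad
      \prod_{i \in N(v)} \sin \alpha_i^{+} = \prod_{i \in N(v)} \sin \alpha_i^{-}.
    \]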
  • Nicolas Beldiceanu, Mats Carlsson, and Sven Thiel, "Cost-filtering Algorithms for the two Sides of the Sum of Weights of Distinct Values Constraint", in SICS Technical Report, (Swedish Institute of Computer Science, Kista, 2002).  
    Citation
    Nicolas Beldiceanu, Mats Carlsson, and Sven Thiel, "Cost-filtering Algorithms for the two Sides of the Sum of Weights of Distinct Values Constraint", in SICS Technical Report, (Swedish Institute of Computer Science, Kista, 2002).
    Abstract
    This article introduces the sum of weights of distinct values constraint, which can be seen as a generalization of the number-of-distinct-values constraint as well as of the alldifferent and relaxed alldifferent constraints. This constraint holds if a cost variable is equal to the sum of the weights associated with the distinct values taken by a given set of variables. For the first aspect, which is related to domination, we present four filtering algorithms. Two of them lead to perfect pruning when the domain of each variable consists of one set of consecutive values, while the other two take advantage of holes in the domains. For the second aspect, which is connected to maximum matching in a bipartite graph, we provide a complete filtering algorithm for the general case. Finally we introduce several generic deduction rules, which link both aspects of the constraint. These rules can be applied to other optimization constraints such as the minimum weight alldifferent constraint or the global cardinality constraint with costs. They also allow external constraints to be taken into account in order to obtain enhanced bounds for the cost variable. In practice, the sum of weights of distinct values constraint occurs in assignment problems where using a resource once or several times costs the same. It also captures domination problems where one has to select a set of vertices in order to control every vertex of a graph. (A checker for the constraint follows this entry.)
    Export
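    The constraint itself is easy to state operationally, which makes the filtering algorithms (not shown) the interesting part; a checker makes the "using a value once or several times costs the same" semantics concrete (Python; names and data are illustrative):

    def swdv_holds(assignment, weight, cost):
        """Sum-of-weights-of-distinct-values: the cost variable must equal
        the sum of the weights of the *distinct* values taken."""
        return cost == sum(weight[v] for v in set(assignment))

    weight = {'red': 3, 'green': 5, 'blue': 2}
    print(swdv_holds(['red', 'red', 'blue'], weight, cost=5))   # True: 3 + 2
    print(swdv_holds(['red', 'green'], weight, cost=5))         # False: 3 + 5 = 8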
  • Witold Charatonik and Jean-Marc Talbot, "Atomic set constraints with projection", in Research Report / Max-Planck-Institut für Informatik, (2002), pp. 20 p.  
    Citation
    Witold Charatonik and Jean-Marc Talbot, "Atomic set constraints with projection", in Research Report / Max-Planck-Institut für Informatik, (2002), pp. 20 p.
    Abstract
    We investigate a class of set constraints defined as atomic set constraints augmented with projection. This class subsumes some already studied classes such as atomic set constraints with left-hand side projection and INES constraints. All these classes enjoy the nice property that satisfiability can be tested in cubic time. This is in contrast to several other classes of set constraints, such as definite set constraints and positive set constraints, for which satisfiability ranges from DEXPTIME-complete to NEXPTIME-complete. However, these latter classes allow set operators such as intersection or union, which is not the case for the class studied here. In the case of atomic set constraints with projection one might expect that satisfiability remains polynomial. Unfortunately, we show that the satisfiability problem for this class is no longer polynomial, but coNP-hard. Furthermore, we devise a PSPACE algorithm to solve this satisfiability problem.
    Export
  • Witold Charatonik and Harald Ganzinger, "Symposium on the Effectiveness of Logic in Computer Science in Honour of Moshe Vardi", in Research Report, (2002), pp. 64 p.  
    Citation
    Witold Charatonik and Harald Ganzinger, "Symposium on the Effectiveness of Logic in Computer Science in Honour of Moshe Vardi", in Research Report, (2002), pp. 64 p.
    Abstract
    This report contains abstracts of the talks given at the Symposium on the Effectiveness of Logic in Computer Science in Saarbrücken on March 4-6, 2002. The symposium was held in honour of Professor Moshe Vardi, Rice University, on the occasion of awarding him an Honorary Doctoral Degree from Saarland University for his outstanding contributions to the area of Logic in Computer Science.
    Export
  • Frederic Drago, William Martens, Karol Myszkowski, and Hans-Peter Seidel, "Perceptual evaluation of tone mapping operators with regard to similarity and preference", in Research Report / Max-Planck-Institut für Informatik, (2002), pp. 30 p.  
    Citation
    Frederic Drago, William Martens, Karol Myszkowski, and Hans-Peter Seidel, "Perceptual evaluation of tone mapping operators with regard to similarity and preference", in Research Report / Max-Planck-Institut für Informatik, (2002), pp. 30 p.
    Abstract
    Seven tone mapping methods currently available to display high dynamic range images were submitted to perceptual evaluation in order to find the attributes most predictive of the success of a robust all-around tone mapping algorithm. The two most salient Stimulus Space dimensions underlying the perception of a set of images produced by six of the tone mappings were revealed using INdividual Differences SCALing (INDSCAL) analysis; and an ideal preference point within the INDSCAL-derived Stimulus Space was determined for a group of 11 observers using PREFerence MAPping (PREFMAP) analysis. Interpretation of the INDSCAL results was aided by pairwise comparisons of images that led to an ordering of the images according to which were more or less natural looking.
    Export
  • Arno Eigenwillig, Elmar Schömer, and Nicola Wolpert, "Sweeping Arrangements of Cubic Segments Exactly and Efficiently", (Effective Computational Geometry for Curves and Surfaces, Sophia Antipolis, 2002).  
    Citation
    Arno Eigenwillig, Elmar Schömer, and Nicola Wolpert, "Sweeping Arrangements of Cubic Segments Exactly and Efficiently", (Effective Computational Geometry for Curves and Surfaces, Sophia Antipolis, 2002).
    Abstract
    A method is presented to compute the planar arrangement induced by segments of algebraic curves of degree three (or less), using an improved Bentley-Ottmann sweep-line algorithm. Our method is exact (it provides the mathematically correct result), complete (it handles all possible geometric degeneracies), and efficient (the implementation can handle hundreds of segments). The range of possible input segments comprises conic arcs and cubic splines as special cases of particular practical importance.
    Export
  • Michael Gösele, Jan Kautz, Jochen Lang, Hendrik P. A. Lensch, and Hans-Peter Seidel, "Tutorial notes ACM SM 02: a framework for the acquisition, processing and interactive display of high quality 3D models", in Research Report / Max-Planck-Institut für Informatik, (2002), pp. 50 p.  
    Citation
    Michael Gösele, Jan Kautz, Jochen Lang, Hendrik P. A. Lensch, and Hans-Peter Seidel, "Tutorial notes ACM SM 02: a framework for the acquisition, processing and interactive display of high quality 3D models", in Research Report / Max-Planck-Institut für Informatik, (2002), pp. 50 p.
    Abstract
    This tutorial highlights some recent results on the acquisition and interactive display of high quality 3D models. For further use in photorealistic rendering or interactive display, a high quality representation must capture two different things: on the one hand the shape of the model, represented as a geometric description of its surface, and on the other hand the physical properties of the object. The physics of the material which an object is made of determines its appearance, e.g. the object's color, texture, deformation or reflection properties. The tutorial shows how computer vision and computer graphics techniques can be seamlessly integrated into a single framework for the acquisition, processing, and interactive display of high quality 3D models.
    Export
  • Fabrizio Grandoni, "Incrementally Maintaining the Number of l-cliques", in Research Report, (2002), pp. 10 p.  
    Citation
    Fabrizio Grandoni, "Incrementally Maintaining the Number of l-cliques", in Research Report, (2002), pp. 10 p.
    Abstract
    The main contribution of this paper is an incremental algorithm to update the number of $l$-cliques, for $l \geq 3$, in which each node of a graph is contained, after the deletion of an arbitrary node. The initialization cost is $O(n^{\omega p+q})$, where $n$ is the number of nodes, $p=\lfloor \frac{l}{3} \rfloor$, $q=l \pmod{3}$, and $\omega=\omega(1,1,1)$ is the exponent of the multiplication of two $n \times n$ matrices. The amortized updating cost is $O(n^{q}T(n,p,\epsilon))$ for any $\epsilon \in [0,1]$, where $T(n,p,\epsilon)=\min\{n^{p-1}(n^{p(1+\epsilon)}+n^{p(\omega(1,\epsilon,1)-\epsilon)}),n^{p \omega(1,\frac{p-1}{p},1)}\}$ and $\omega(1,r,1)$ is the exponent of the multiplication of an $n \times n^{r}$ matrix by an $n^{r} \times n$ matrix. The current best bounds on $\omega(1,r,1)$ imply an $O(n^{2.376p+q})$ initialization cost, an $O(n^{2.575p+q-1})$ updating cost for $3 \leq l \leq 8$, and an $O(n^{2.376p+q-0.532})$ updating cost for $l \geq 9$. An interesting application to constraint programming is also considered. (A sketch of the base case $l=3$ follows this entry.)
    Export
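    The $l = 3$ base case shows where matrix multiplication enters: the diagonal of $A^3$ counts, for each node, twice the number of triangles through it (Python with NumPy; the report's contribution, maintaining such counts under node deletions, is not reproduced):

    import numpy as np

    def triangles_per_node(A):
        """Number of 3-cliques containing each node: (A^3)_{vv} counts
        closed walks of length 3 from v, each triangle twice."""
        A3 = A @ A @ A
        return np.diag(A3) // 2

    # K4: every node lies in C(3,2) = 3 triangles.
    A = np.ones((4, 4), dtype=int) - np.eye(4, dtype=int)
    print(triangles_per_node(A))   # [3 3 3 3]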
  • Susan Hert, Tobias Polzin, Lutz Kettner, and Guido Schäfer, "Exp Lab: a tool set for computational experiments", in Research Report, (2002), pp. 59 p.  
    Citation
    Susan Hert, Tobias Polzin, Lutz Kettner, and Guido Schäfer, "Exp Lab: a tool set for computational experiments", in Research Report, (2002), pp. 59 p.
    Abstract
    We describe a set of tools that support the running, documentation, and evaluation of computational experiments. The tool set is designed not only to make computational experimentation easier but also to support good scientific practice by making results reproducible and more easily comparable to others' results through automatic documentation of the experimental environment. The tools can be used separately or in concert and support all manner of experiments (\textit{i.e.}, any executable can be an experiment). The tools capitalize on the rich functionality available in Python to provide extreme flexibility and ease of use, but one need know nothing of Python to use the tools. (A sketch of the idea follows this entry.)
    Export
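    The underlying idea, recording the experimental environment automatically next to each run, fits in a few lines. This sketch is generic Python and deliberately does not imitate the actual ExpLab interfaces, which are documented in the report:

    import json, platform, subprocess, sys, time

    def run_documented(cmd, log_path):
        """Run an executable as an 'experiment' and record the environment
        alongside its output, so results stay reproducible and comparable."""
        record = {
            "cmd": cmd,
            "timestamp": time.strftime("%Y-%m-%d %H:%M:%S"),
            "python": sys.version,
            "platform": platform.platform(),
            "machine": platform.machine(),
        }
        result = subprocess.run(cmd, capture_output=True, text=True)
        record["returncode"] = result.returncode
        record["stdout"] = result.stdout
        with open(log_path, "w") as f:
            json.dump(record, f, indent=2)
        return record

    run_documented(["echo", "hello-experiment"], "experiment.json")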
  • Martin Hoefer, "Performance of heuristic and approximation algorithms for the uncapacitated facility location problem", in Research Report / Max-Planck-Institut für Informatik, (2002), pp. 27 p.  
    Citation
    Martin Hoefer, "Performance of heuristic and approximation algorithms for the uncapacitated facility location problem", in Research Report / Max-Planck-Institut für Informatik, (2002), pp. 27 p.
    Abstract
    The uncapacitated facility location problem (UFLP) has been studied intensively in operational research. Recently, a variety of new deterministic and heuristic approximation algorithms have been proposed. In this paper, we compare five new approaches to this problem - the JMS and MYZ approximation algorithms, a version of local search, a Tabu Search algorithm, as well as a version of the Volume algorithm with randomized rounding. We compare solution quality and running times on different standard benchmark instances. A web page was set up where these instances and the additional material used in this study are accessible. (A sketch of local search follows this entry.)
    Export
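    A minimal version of the local search family compared above: repeatedly toggle a facility open or closed while the total of opening plus cheapest-service costs improves (Python; the function names and toy instance are illustrative):

    def uflp_cost(open_fac, opening, service):
        """Opening costs of the open facilities plus, for each client,
        the cheapest service cost to an open facility."""
        return (sum(opening[f] for f in open_fac) +
                sum(min(service[c][f] for f in open_fac) for c in range(len(service))))

    def uflp_local_search(opening, service):
        best = {0}                                    # start with facility 0 open
        improved = True
        while improved:
            improved = False
            for f in range(len(opening)):
                cand = best ^ {f}                     # toggle facility f
                if cand and uflp_cost(cand, opening, service) < uflp_cost(best, opening, service):
                    best, improved = cand, True
        return best, uflp_cost(best, opening, service)

    opening = [4, 3, 4]                               # facility opening costs
    service = [[1, 6, 9], [7, 2, 5], [8, 3, 1]]       # service[client][facility]
    print(uflp_local_search(opening, service))        # ({0, 1}, 13)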
  • Irit Katriel, Peter Sanders, and Jesper Larsson Träff, "A practical minimum spanning tree algorithm using the cycle property", in Research Report / Max-Planck-Institut für Informatik, (2002), pp. 21 p.  
    Citation
    Irit Katriel, Peter Sanders, and Jesper Larsson Träff, "A practical minimum spanning tree algorithm using the cycle property", in Research Report / Max-Planck-Institut für Informatik, (2002), pp. 21 p.
    Abstract
    We present a simple new algorithm for computing minimum spanning trees that is more than two times faster than the best previously known algorithms (for dense, ``difficult'' inputs). It is of conceptual interest that the algorithm uses the property that the heaviest edge in a cycle can be discarded. Previously this has only been exploited in asymptotically optimal algorithms that are considered to be impractical. An additional advantage is that the algorithm can greatly profit from pipelined memory access. Hence, an implementation on a vector machine is up to 13 times faster than previous algorithms. We outline additional refinements for MSTs of implicitly defined graphs and the use of the central data structure for querying the heaviest edge between two nodes in the MST. The latter result is also interesting for sparse graphs. (A sketch of the cycle property follows this entry.)
    Export
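    The cycle property the algorithm exploits is easiest to see in reverse-delete form: scanning edges from heaviest to lightest and discarding any edge that lies on a cycle yields the MST (Python; a didactic quadratic sketch, not the paper's algorithm):

    def reverse_delete_mst(n, edges):
        """Discard an edge whenever the graph stays connected without it,
        i.e. whenever it is the heaviest edge on some remaining cycle."""
        def connected(es):
            adj = {v: [] for v in range(n)}
            for u, v, _ in es:
                adj[u].append(v); adj[v].append(u)
            seen, stack = {0}, [0]
            while stack:
                for w in adj[stack.pop()]:
                    if w not in seen:
                        seen.add(w); stack.append(w)
            return len(seen) == n
        keep = sorted(edges, key=lambda e: -e[2])     # heaviest first
        for e in list(keep):
            rest = [f for f in keep if f is not e]
            if connected(rest):                       # e closes a cycle: drop it
                keep = rest
        return keep

    edges = [(0, 1, 4), (1, 2, 2), (0, 2, 5), (2, 3, 1)]
    print(reverse_delete_mst(4, edges))   # drops (0, 2, 5), the heaviest cycle edge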
  • Tobias Polzin and Siavash Vahdati, "Using (sub)graphs of small width for solving the Steiner problem", in Research Report / Max-Planck-Institut für Informatik, (2002), pp. 9 p.  
    Citation
    Tobias Polzin and Siavash Vahdati, "Using (sub)graphs of small width for solving the Steiner problem", in Research Report / Max-Planck-Institut für Informatik, (2002), pp. 9 p.
    Abstract
    For the Steiner tree problem in networks, we present a practical algorithm that uses the fixed-parameter tractability of the problem with respect to a certain width parameter closely related to pathwidth. The running time of the algorithm is linear in the number of vertices when the pathwidth is constant. Combining this algorithm with our previous techniques, we can already profit from small width in subgraphs of an instance. Integrating this algorithm into our program package for the Steiner problem accelerates the solution process on some groups of instances and leads to a fast solution of some previously unsolved benchmark instances.
    Export
  • Peter Sanders and Jesper Larsson Träff, "The factor algorithm for all-to-all communication on clusters of SMP nodes", in Research Report / Max-Planck-Institut für Informatik, (2002), pp. 8 p.  
    Citation
    Peter Sanders and Jesper Larsson Träff, "The factor algorithm for all-to-all communication on clusters of SMP nodes", in Research Report / Max-Planck-Institut für Informatik, (2002), pp. 8 p.
    Abstract
    We present an algorithm for all-to-all personalized communication, in which every processor has an individual message to deliver to every other processor. The machine model we consider is a cluster of processing nodes where each node, possibly consisting of several processors, can participate in only one communication operation with another node at a time. The nodes may have different numbers of processors. This general model is important for the implementation of all-to-all communication in libraries such as MPI where collective communication may take place over arbitrary subsets of processors. The algorithm is simple and optimal up to an additive term that is small if the total number of processors is large compared to the maximal number of processors in a node. (A sketch of the round structure follows this entry.)
    Export
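    The pairing structure behind such algorithms is a 1-factorization of the complete graph on the nodes: $n-1$ rounds of disjoint node pairs in which every pair meets exactly once, as produced by the classical circle method (Python; even $n$ and equal node sizes assumed for illustration, whereas the report handles heterogeneous nodes):

    def one_factorization_rounds(n):
        """Round-robin (circle method) schedule: each of the n-1 rounds
        splits the n nodes into disjoint pairs; every pair meets once."""
        nodes = list(range(n))
        rounds = []
        for _ in range(n - 1):
            rounds.append([(nodes[i], nodes[n - 1 - i]) for i in range(n // 2)])
            nodes = [nodes[0]] + [nodes[-1]] + nodes[1:-1]   # rotate all but node 0
        return rounds

    for r, pairs in enumerate(one_factorization_rounds(6)):
        print(f"round {r}: {pairs}")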
  • MPI for Informatics, Max Planck Society (ed), "Fifth Biennial Report: June 1999 - August 2001", in Biennial Report / Max-Planck-Institut für Informatik, (2001), Vol. 5, pp. 278 p.  
    Citation
    MPI for Informatics, Max Planck Society (ed), "Fifth Biennial Report: June 1999 - August 2001", in Biennial Report / Max-Planck-Institut für Informatik, (2001), Vol. 5, pp. 278 p.
    Export
  • Henrik Björklund, Viktor Petersson, and Sergei Vorobyov, "Experiments with iterative improvement algorithms on completely unimodal hypercubes", in Research Report / Max-Planck-Institut für Informatik, (2001), pp. 25 p.  
    Citation
    Henrik Björklund, Viktor Petersson, and Sergei Vorobyov, "Experiments with iterative improvement algorithms on completely unimodal hypercubes", in Research Report / Max-Planck-Institut für Informatik, (2001), pp. 25 p.
    Export
  • Sung Woo Choi and Hans-Peter Seidel, "Linear one-sided stability of MAT for weakly injective domain", in Research Report / Max-Planck-Institut für Informatik, (2001), pp. 18 p.  
    Citation
    Sung Woo Choi and Hans-Peter Seidel, "Linear one-sided stability of MAT for weakly injective domain", in Research Report / Max-Planck-Institut für Informatik, (2001), pp. 18 p.
    Abstract
    Medial axis transform (MAT) is very sensitive to noise: even if a shape is perturbed only slightly, the Hausdorff distance between the MATs of the original shape and the perturbed one may be large. But it turns out that MAT is stable if we view this phenomenon with the one-sided Hausdorff distance, rather than with the two-sided Hausdorff distance. In this paper, we show that, if the original domain is weakly injective, which means that the MAT of the domain has no end point which is the center of an inscribed circle osculating the boundary at only one point, the one-sided Hausdorff distance of the original domain's MAT with respect to that of the perturbed one is bounded linearly by the Hausdorff distance of the perturbation. We also show by example that the linearity of this bound cannot be achieved for domains which are not weakly injective. In particular, these results apply to domains with sharp corners, which were excluded in the past. One consequence of these results is that we can clarify theoretically the notion of extracting ``the essential part of the MAT'', which is the heart of the existing pruning methods. (The distance notions are restated after this entry.)
    Export
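    The distinction the result rests on, in symbols (hedged notation assumed here: $d_H^{\rightarrow}$ for the one-sided distance used for the MATs, $d_H$ for the two-sided Hausdorff distance used for the domains):
    \[
      d_H^{\rightarrow}(A,B) \;=\; \sup_{a \in A} \inf_{b \in B} \lVert a - b \rVert,
      \qquad
      d_H(A,B) \;=\; \max\bigl\{ d_H^{\rightarrow}(A,B),\; d_H^{\rightarrow}(B,A) \bigr\},
    \]
    and the theorem bounds $d_H^{\rightarrow}$ between the MATs linearly in $d_H$ between the domains.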
  • Bela Csaba and Sachin Lodha, "A Randomized On-line Algorithm for the k-Server Problem on a Line", (DIMACS-Center for Discrete Mathematics & Theoretical Computer Science, Piscataway, NJ, 2001).  
    Citation
    Bela Csaba and Sachin Lodha, "A Randomized On-line Algorithm for the k-Server Problem on a Line", (DIMACS-Center for Discrete Mathematics & Theoretical Computer Science, Piscataway, NJ, 2001).
    Abstract
    We give an $O(n^{2/3}\log n)$-competitive randomized $k$-server algorithm when the underlying metric space is given by $n$ equally spaced points on a line. For $n = k + o(k^{3/2}/\log k)$, this algorithm is $o(k)$-competitive.
    Export
  • Katja Daubert, Wolfgang Heidrich, Jan Kautz, Jean-Michel Dischler, and Hans-Peter Seidel, "Efficient light transport using precomputed visibility", in Research Report / Max-Planck-Institut für Informatik, (2001), pp. 32 p.  
    Citation
    Katja Daubert, Wolfgang Heidrich, Jan Kautz, Jean-Michel Dischler, and Hans-Peter Seidel, "Efficient light transport using precomputed visibility", in Research Report / Max-Planck-Institut für Informatik, (2001), pp. 32 p.
    Abstract
    Visibility computations are the most time-consuming part of global illumination algorithms. The cost is amplified by the fact that quite often identical or similar information is recomputed multiple times. In particular this is the case when multiple images of the same scene are to be generated under varying lighting conditions and/or viewpoints. But even for a single image with static illumination, the computations could be accelerated by reusing visibility information for many different light paths. In this report we describe a general method of precomputing, storing, and reusing visibility information for light transport in a number of different types of scenes. In particular, we consider general parametric surfaces, triangle meshes without a global parameterization, and participating media. We also reorder the light transport in such a way that the visibility information is accessed in structured memory access patterns. This yields a method that is well suited for SIMD-style parallelization of the light transport, and can efficiently be implemented both in software and using graphics hardware. We finally demonstrate applications of the method to highly efficient precomputation of BRDFs, bidirectional texture functions, light fields, as well as near-interactive volume lighting.
    Export
  • Hans de Nivelle and Stephan Schulz, "Proceedings of the Second International Workshop on the Implementation of Logics", in Research Report, (2001), pp. 102 p.  
    Citation
    Hans de Nivelle and Stephan Schulz, "Proceedings of the Second International Workshop on the Implementation of Logics", in Research Report, (2001), pp. 102 p.
    Export
  • Susan Hert, Michael Hoffmann, Lutz Kettner, Sylvain Pion, and Michael Seel, "An adaptable and extensible geometry kernel", in Research Report / Max-Planck-Institut für Informatik, (2001), pp. 27 p.  
    Citation
    Susan Hert, Michael Hoffmann, Lutz Kettner, Sylvain Pion, and Michael Seel, "An adaptable and extensible geometry kernel", in Research Report / Max-Planck-Institut für Informatik, (2001), pp. 27 p.
    Abstract
    Geometric algorithms are based on geometric objects such as points, lines and circles. The term \textit{kernel\/} refers to a collection of representations for constant-size geometric objects and operations on these representations. This paper describes how such a geometry kernel can be designed and implemented in C++, with special emphasis on adaptability, extensibility and efficiency. We achieve these goals following the generic programming paradigm and using templates as our tools. These ideas are realized and tested in \cgal~\cite{svy-cgal}, the Computational Geometry Algorithms Library.
    Export
  • Piotr Krysta, "Approximating minimum size 1,2-connected networks", in Research Report / Max-Planck-Institut für Informatik, (2001), pp. 22 p.  
    Citation
    Piotr Krysta, "Approximating minimum size 1,2-connected networks", in Research Report / Max-Planck-Institut für Informatik, (2001), pp. 22 p.
    Abstract
    The problem of finding the minimum size 2-connected subgraph is a classical problem in network design. It is known to be NP-hard even on cubic planar graphs and Max-SNP hard. We study the generalization of this problem, where requirements of 1 or 2 edge or vertex disjoint paths are specified between every pair of vertices, and the aim is to find a minimum subgraph satisfying these requirements. For both problems we give $3/2$-approximation algorithms. This improves on the straightforward $2$-approximation algorithms for these problems, and generalizes earlier results for 2-connectivity. We also give analyses of the classical local optimization heuristics for these two network design problems.
    Export
  • Hendrik P. A. Lensch, Jan Kautz, Michael Gösele, Wolfgang Heidrich, and Hans-Peter Seidel, "Image-based reconstruction of spatially varying materials", in Research Report / Max-Planck-Institut für Informatik, (2001), pp. 20 p.