Research Reports of the Max Planck Institute for Informatics
2023
Report on the Security State of Networks of Max-Planck Institutes: Findings and Recommendations
T. Fiebig
Technical Report, 2023
BibTeX
@techreport{Fiebig_Report23,
TITLE = {Report on the Security State of Networks of Max-Planck Institutes: Findings and Recommendations},
AUTHOR = {Fiebig, Tobias},
LANGUAGE = {eng},
DOI = {10.17617/2.3532055},
INSTITUTION = {Max Planck Society},
ADDRESS = {M{\"u}nchen},
YEAR = {2023},
}
Endnote
%0 Report
%A Fiebig, Tobias
%+ Internet Architecture, MPI for Informatics, Max Planck Society
%T Report on the Security State of Networks of Max-Planck Institutes: Findings and Recommendations
%G eng
%U http://hdl.handle.net/21.11116/0000-000D-C4C9-3
%R 10.17617/2.3532055
%Y Max Planck Society
%C München
%D 2023
%P 70 p.
2020
Parametric Hand Texture Model for 3D Hand Reconstruction and Personalization
N. Qian, J. Wang, F. Mueller, F. Bernard, V. Golyanik and C. Theobalt
Technical Report, 2020
Abstract
3D hand reconstruction from image data is a widely-studied problem in computer vision and graphics, and has a particularly high relevance for virtual and augmented reality. Although several 3D hand reconstruction approaches leverage hand models as a strong prior to resolve ambiguities and achieve a more robust reconstruction, most existing models account only for the hand shape and poses and do not model the texture. To fill this gap, in this work we present the first parametric texture model of human hands. Our model spans several dimensions of hand appearance variability (e.g., related to gender, ethnicity, or age) and only requires a commodity camera for data acquisition. Experimentally, we demonstrate that our appearance model can be used to tackle a range of challenging problems such as 3D hand reconstruction from a single monocular image. Furthermore, our appearance model can be used to define a neural rendering layer that enables training with a self-supervised photometric loss. We make our model publicly available.
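To make the idea concrete, a linear (PCA-style) appearance basis is one minimal way to realize such a parametric texture model. The sketch below is an assumption-laden illustration, not the report's actual implementation; all names and dimensions are hypothetical.

import numpy as np

class HandTextureModel:
    """Minimal linear texture model: texture = mean + basis @ alpha (sketch)."""

    def __init__(self, mean_texture, basis):
        self.mean = mean_texture   # (P,) flattened texel values
        self.basis = basis         # (P, K) appearance components

    def synthesize(self, alpha):
        # Generate a texture from K appearance parameters
        # (e.g., variation related to gender, ethnicity, or age).
        return np.clip(self.mean + self.basis @ alpha, 0.0, 1.0)

    def fit(self, observed, l2=1e-3):
        # Regularized least-squares projection of an observed texture
        # onto the basis; this is the kind of term a self-supervised
        # photometric loss could drive.
        B = self.basis
        A = B.T @ B + l2 * np.eye(B.shape[1])
        return np.linalg.solve(A, B.T @ (observed - self.mean))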
BibTeX
@techreport{Qian_report2020,
TITLE = {Parametric Hand Texture Model for {3D} Hand Reconstruction and Personalization},
AUTHOR = {Qian, Neng and Wang, Jiayi and Mueller, Franziska and Bernard, Florian and Golyanik, Vladislav and Theobalt, Christian},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2020-4-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2020},
ABSTRACT = {3D hand reconstruction from image data is a widely-studied problem in computer vision and graphics, and has a particularly high relevance for virtual and augmented reality. Although several 3D hand reconstruction approaches leverage hand models as a strong prior to resolve ambiguities and achieve a more robust reconstruction, most existing models account only for the hand shape and poses and do not model the texture. To fill this gap, in this work we present the first parametric texture model of human hands. Our model spans several dimensions of hand appearance variability (e.g., related to gender, ethnicity, or age) and only requires a commodity camera for data acquisition. Experimentally, we demonstrate that our appearance model can be used to tackle a range of challenging problems such as 3D hand reconstruction from a single monocular image. Furthermore, our appearance model can be used to define a neural rendering layer that enables training with a self-supervised photometric loss. We make our model publicly available.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Qian, Neng
%A Wang, Jiayi
%A Mueller, Franziska
%A Bernard, Florian
%A Golyanik, Vladislav
%A Theobalt, Christian
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Parametric Hand Texture Model for 3D Hand Reconstruction and Personalization
%G eng
%U http://hdl.handle.net/21.11116/0000-0006-9128-9
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2020
%P 37 p.
%X 3D hand reconstruction from image data is a widely-studied problem in computer vision and graphics, and has a particularly high relevance for virtual and augmented reality. Although several 3D hand reconstruction approaches leverage hand models as a strong prior to resolve ambiguities and achieve a more robust reconstruction, most existing models account only for the hand shape and poses and do not model the texture. To fill this gap, in this work we present the first parametric texture model of human hands. Our model spans several dimensions of hand appearance variability (e.g., related to gender, ethnicity, or age) and only requires a commodity camera for data acquisition. Experimentally, we demonstrate that our appearance model can be used to tackle a range of challenging problems such as 3D hand reconstruction from a single monocular image. Furthermore, our appearance model can be used to define a neural rendering layer that enables training with a self-supervised photometric loss. We make our model publicly available.
%K hand texture model, appearance modeling, hand tracking, 3D hand reconstruction
%B Research Report
%@ false
2017
Live User-guided Intrinsic Video For Static Scenes
G. Fox, A. Meka, M. Zollhöfer, C. Richardt and C. Theobalt
Technical Report, 2017
Abstract
We present a novel real-time approach for user-guided intrinsic decomposition of static scenes captured by an RGB-D sensor. In the first step, we acquire a three-dimensional representation of the scene using a dense volumetric reconstruction framework. The obtained reconstruction serves as a proxy to densely fuse reflectance estimates and to store user-provided constraints in three-dimensional space. User constraints, in the form of constant shading and reflectance strokes, can be placed directly on the real-world geometry using an intuitive touch-based interaction metaphor, or using interactive mouse strokes. Fusing the decomposition results and constraints in three-dimensional space allows for robust propagation of this information to novel views by re-projection. We leverage this information to improve on the decomposition quality of existing intrinsic video decomposition techniques by further constraining the ill-posed decomposition problem. In addition to improved decomposition quality, we show a variety of live augmented reality applications such as recoloring of objects, relighting of scenes and editing of material appearance.
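The propagation step above relies on re-projecting information stored on the 3D proxy into novel views. A bare-bones pinhole re-projection of per-point reflectance might look as follows, a sketch under assumed camera conventions; the paper's dense volumetric pipeline is considerably more involved.

import numpy as np

def reproject_reflectance(points, reflectance, K, R, t, width, height):
    """Splat reflectance estimates stored on 3D proxy points into a
    novel view: world -> camera -> image via a pinhole model (sketch)."""
    cam = R @ points.T + t[:, None]          # (3, N) camera-space points
    pix = K @ cam                            # apply intrinsics K
    uv = (pix[:2] / pix[2]).T                # perspective divide -> (N, 2)
    img = np.zeros((height, width, 3))
    for (u, v), rgb in zip(uv, reflectance):
        ui, vi = int(round(u)), int(round(v))
        if 0 <= ui < width and 0 <= vi < height:
            img[vi, ui] = rgb                # nearest-neighbor splat
    return img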
BibTeX
@techreport{Report2017-4-001,
TITLE = {Live User-guided Intrinsic Video For Static Scenes},
AUTHOR = {Fox, Gereon and Meka, Abhimitra and Zollh{\"o}fer, Michael and Richardt, Christian and Theobalt, Christian},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2017-4-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2017},
ABSTRACT = {We present a novel real-time approach for user-guided intrinsic decomposition of static scenes captured by an RGB-D sensor. In the first step, we acquire a three-dimensional representation of the scene using a dense volumetric reconstruction framework. The obtained reconstruction serves as a proxy to densely fuse reflectance estimates and to store user-provided constraints in three-dimensional space. User constraints, in the form of constant shading and reflectance strokes, can be placed directly on the real-world geometry using an intuitive touch-based interaction metaphor, or using interactive mouse strokes. Fusing the decomposition results and constraints in three-dimensional space allows for robust propagation of this information to novel views by re-projection. We leverage this information to improve on the decomposition quality of existing intrinsic video decomposition techniques by further constraining the ill-posed decomposition problem. In addition to improved decomposition quality, we show a variety of live augmented reality applications such as recoloring of objects, relighting of scenes and editing of material appearance.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Fox, Gereon
%A Meka, Abhimitra
%A Zollhöfer, Michael
%A Richardt, Christian
%A Theobalt, Christian
%+ External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Live User-guided Intrinsic Video For Static Scenes
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-002C-5DA7-3
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2017
%P 12 p.
%X We present a novel real-time approach for user-guided intrinsic decomposition of static scenes captured by an RGB-D sensor. In the first step, we acquire a three-dimensional representation of the scene using a dense volumetric reconstruction framework. The obtained reconstruction serves as a proxy to densely fuse reflectance estimates and to store user-provided constraints in three-dimensional space. User constraints, in the form of constant shading and reflectance strokes, can be placed directly on the real-world geometry using an intuitive touch-based interaction metaphor, or using interactive mouse strokes. Fusing the decomposition results and constraints in three-dimensional space allows for robust propagation of this information to novel views by re-projection. We leverage this information to improve on the decomposition quality of existing intrinsic video decomposition techniques by further constraining the ill-posed decomposition problem. In addition to improved decomposition quality, we show a variety of live augmented reality applications such as recoloring of objects, relighting of scenes and editing of material appearance.
%B Research Report
%@ false
Generating Semantic Aspects for Queries
D. Gupta, K. Berberich, J. Strötgen and D. Zeinalipour-Yazti
Technical Report, 2017
Abstract
Ambiguous information needs expressed in a limited number of keywords often result in long-winded query sessions and many query reformulations. In this work, we tackle ambiguous queries by providing automatically generated semantic aspects that can guide users to satisfying results regarding their information needs. To generate semantic aspects, we use semantic annotations available in the documents and leverage models representing the semantic relationships between annotations of the same type. The aspects in turn provide us a foundation for representing text in a completely structured manner, thereby allowing for a semantically-motivated organization of search results. We evaluate our approach on a testbed of over 5,000 aspects on Web scale document collections amounting to more than 450 million documents, with temporal, geographic, and named entity annotations as example dimensions. Our experimental results show that our general approach is Web-scale ready and finds relevant aspects for highly ambiguous queries.
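At its simplest, aspect generation of this kind can be pictured as grouping the annotations found in pseudo-relevant documents by type and keeping the most salient values per type. The sketch below illustrates only that skeleton, with a hypothetical data layout; the report additionally leverages models of the semantic relationships between annotations of the same type.

from collections import Counter, defaultdict

def candidate_aspects(pseudo_relevant_docs, per_type=5):
    """Group semantic annotations by type (time, geo, entity) and
    return the most frequent values per type as candidate aspects."""
    counts = defaultdict(Counter)
    for doc in pseudo_relevant_docs:
        for ann_type, value in doc["annotations"]:  # e.g. ("time", "1990s")
            counts[ann_type][value] += 1
    return {t: [v for v, _ in c.most_common(per_type)]
            for t, c in counts.items()}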
BibTeX
@techreport{Guptareport2007,
TITLE = {Generating Semantic Aspects for Queries},
AUTHOR = {Gupta, Dhruv and Berberich, Klaus and Str{\"o}tgen, Jannik and Zeinalipour-Yazti, Demetrios},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2017-5-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2017},
ABSTRACT = {Ambiguous information needs expressed in a limited number of keywords often result in long-winded query sessions and many query reformulations. In this work, we tackle ambiguous queries by providing automatically generated semantic aspects that can guide users to satisfying results regarding their information needs. To generate semantic aspects, we use semantic annotations available in the documents and leverage models representing the semantic relationships between annotations of the same type. The aspects in turn provide us a foundation for representing text in a completely structured manner, thereby allowing for a semantically-motivated organization of search results. We evaluate our approach on a testbed of over 5,000 aspects on Web scale document collections amounting to more than 450 million documents, with temporal, geographic, and named entity annotations as example dimensions. Our experimental results show that our general approach is Web-scale ready and finds relevant aspects for highly ambiguous queries.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Gupta, Dhruv
%A Berberich, Klaus
%A Strötgen, Jannik
%A Zeinalipour-Yazti, Demetrios
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Generating Semantic Aspects for Queries
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-002E-07DD-0
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2017
%P 39 p.
%X Ambiguous information needs expressed in a limited number of keywords often result in long-winded query sessions and many query reformulations. In this work, we tackle ambiguous queries by providing automatically generated semantic aspects that can guide users to satisfying results regarding their information needs. To generate semantic aspects, we use semantic annotations available in the documents and leverage models representing the semantic relationships between annotations of the same type. The aspects in turn provide us a foundation for representing text in a completely structured manner, thereby allowing for a semantically-motivated organization of search results. We evaluate our approach on a testbed of over 5,000 aspects on Web scale document collections amounting to more than 450 million documents, with temporal, geographic, and named entity annotations as example dimensions. Our experimental results show that our general approach is Web-scale ready and finds relevant aspects for highly ambiguous queries.
%B Research Report
%@ false
WatchSense: On- and Above-Skin Input Sensing through a Wearable Depth Sensor
S. Sridhar, A. Markussen, A. Oulasvirta, C. Theobalt and S. Boring
Technical Report, 2017
Abstract
This paper contributes a novel sensing approach to support on- and above-skin finger input for interaction on the move. WatchSense uses a depth sensor embedded in a wearable device to expand the input space to neighboring areas of skin and the space above it. Our approach addresses challenging camera-based tracking conditions, such as oblique viewing angles and occlusions. It can accurately detect fingertips, their locations, and whether they are touching the skin or hovering above it. It extends previous work that supported either mid-air or multitouch input by simultaneously supporting both. We demonstrate feasibility with a compact, wearable prototype attached to a user's forearm (simulating an integrated depth sensor). Our prototype---which runs in real-time on consumer mobile devices---enables a 3D input space on the back of the hand. We evaluated the accuracy and robustness of the approach in a user study. We also show how WatchSense increases the expressiveness of input by interweaving mid-air and multitouch for several interactive applications.
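The touch-versus-hover distinction at the heart of this input space can be illustrated with a deliberately simplified depth comparison; the threshold and interface here are hypothetical, and the actual prototype uses learned fingertip detection that copes with oblique views and occlusion.

def classify_fingertip(tip_depth_mm, skin_depth_mm, touch_eps_mm=8.0):
    """Label a detected fingertip as touching the skin or hovering
    above it by comparing its depth to the skin surface below it."""
    gap = skin_depth_mm - tip_depth_mm   # positive: fingertip above the skin
    return "touch" if gap <= touch_eps_mm else "hover"

A real pipeline would estimate the skin surface under each fingertip from the depth image rather than take it as given, and would track fingertip locations over time to interweave mid-air and multitouch input.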
BibTeX
@techreport{sridharwatch17,
TITLE = {{WatchSense}: On- and Above-Skin Input Sensing through a Wearable Depth Sensor},
AUTHOR = {Sridhar, Srinath and Markussen, Anders and Oulasvirta, Antti and Theobalt, Christian and Boring, Sebastian},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2016-4-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2017},
ABSTRACT = {This paper contributes a novel sensing approach to support on- and above-skin finger input for interaction on the move. WatchSense uses a depth sensor embedded in a wearable device to expand the input space to neighboring areas of skin and the space above it. Our approach addresses challenging camera-based tracking conditions, such as oblique viewing angles and occlusions. It can accurately detect fingertips, their locations, and whether they are touching the skin or hovering above it. It extends previous work that supported either mid-air or multitouch input by simultaneously supporting both. We demonstrate feasibility with a compact, wearable prototype attached to a user's forearm (simulating an integrated depth sensor). Our prototype---which runs in real-time on consumer mobile devices---enables a 3D input space on the back of the hand. We evaluated the accuracy and robustness of the approach in a user study. We also show how WatchSense increases the expressiveness of input by interweaving mid-air and multitouch for several interactive applications.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Sridhar, Srinath
%A Markussen, Anders
%A Oulasvirta, Antti
%A Theobalt, Christian
%A Boring, Sebastian
%+ Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
%T WatchSense: On- and Above-Skin Input Sensing through a Wearable Depth Sensor
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-002C-402E-D
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2017
%P 17 p.
%X This paper contributes a novel sensing approach to support on- and above-skin finger input for interaction on the move. WatchSense uses a depth sensor embedded in a wearable device to expand the input space to neighboring areas of skin and the space above it. Our approach addresses challenging camera-based tracking conditions, such as oblique viewing angles and occlusions. It can accurately detect fingertips, their locations, and whether they are touching the skin or hovering above it. It extends previous work that supported either mid-air or multitouch input by simultaneously supporting both. We demonstrate feasibility with a compact, wearable prototype attached to a user's forearm (simulating an integrated depth sensor). Our prototype---which runs in real-time on consumer mobile devices---enables a 3D input space on the back of the hand. We evaluated the accuracy and robustness of the approach in a user study. We also show how WatchSense increases the expressiveness of input by interweaving mid-air and multitouch for several interactive applications.
%B Research Report
%@ false
2016
Verification of Linear Hybrid Systems with Large Discrete State Spaces: Exploring the Design Space for Optimization
E. Althaus, B. Beber, W. Damm, S. Disch, W. Hagemann, A. Rakow, C. Scholl, U. Waldmann and B. Wirtz
Technical Report, 2016
Abstract
This paper provides a suite of optimization techniques for the verification of safety properties of linear hybrid automata with large discrete state spaces, such as naturally arise when incorporating health state monitoring and degradation levels into the controller design. Such models can -- in contrast to purely functional controller models -- not be analyzed with hybrid verification engines relying on explicit representations of modes, but require fully symbolic representations for both the continuous and discrete part of the state space. The optimization techniques shown consistently yield a speedup of about 20 against previously published results for a similar benchmark suite, and complement these with new results on counterexample guided abstraction refinement. In combination with the methods guaranteeing preciseness of abstractions, this allows us to significantly extend the class of models for which safety can be established, covering in particular models with 23 continuous variables and 2^71 discrete states, 20 continuous variables and 2^199 discrete states, and 9 continuous variables and 2^271 discrete states.
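For a sense of scale (our arithmetic, not the report's): 2^71 is about 2.4 x 10^21 discrete states, so even enumerating 10^9 states per second would take on the order of 75,000 years. Explicit representations of modes are hopeless at these sizes, which is what motivates the fully symbolic treatment of the discrete part of the state space.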
BibTeX
@techreport{AlthausBeberDammEtAl2016ATR,
TITLE = {Verification of Linear Hybrid Systems with Large Discrete State Spaces: Exploring the Design Space for Optimization},
AUTHOR = {Althaus, Ernst and Beber, Bj{\"o}rn and Damm, Werner and Disch, Stefan and Hagemann, Willem and Rakow, Astrid and Scholl, Christoph and Waldmann, Uwe and Wirtz, Boris},
LANGUAGE = {eng},
ISSN = {1860-9821},
NUMBER = {ATR103},
INSTITUTION = {SFB/TR 14 AVACS},
YEAR = {2016},
DATE = {2016},
ABSTRACT = {This paper provides a suite of optimization techniques for the verification of safety properties of linear hybrid automata with large discrete state spaces, such as naturally arise when incorporating health state monitoring and degradation levels into the controller design. Such models can -- in contrast to purely functional controller models -- not be analyzed with hybrid verification engines relying on explicit representations of modes, but require fully symbolic representations for both the continuous and discrete part of the state space. The optimization techniques shown consistently yield a speedup of about 20 against previously published results for a similar benchmark suite, and complement these with new results on counterexample guided abstraction refinement. In combination with the methods guaranteeing preciseness of abstractions, this allows us to significantly extend the class of models for which safety can be established, covering in particular models with 23 continuous variables and $2^{71}$ discrete states, 20 continuous variables and $2^{199}$ discrete states, and 9 continuous variables and $2^{271}$ discrete states.},
TYPE = {AVACS Technical Report},
VOLUME = {103},
}
Endnote
%0 Report
%A Althaus, Ernst
%A Beber, Björn
%A Damm, Werner
%A Disch, Stefan
%A Hagemann, Willem
%A Rakow, Astrid
%A Scholl, Christoph
%A Waldmann, Uwe
%A Wirtz, Boris
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
Automation of Logic, MPI for Informatics, Max Planck Society
International Max Planck Research School, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
Automation of Logic, MPI for Informatics, Max Planck Society
External Organizations
%T Verification of Linear Hybrid Systems with Large Discrete State Spaces: Exploring the Design Space for Optimization
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-002C-4540-0
%Y SFB/TR 14 AVACS
%D 2016
%P 93 p.
%X This paper provides a suite of optimization techniques for the verification of safety properties of linear hybrid automata with large discrete state spaces, such as naturally arise when incorporating health state monitoring and degradation levels into the controller design. Such models can -- in contrast to purely functional controller models -- not be analyzed with hybrid verification engines relying on explicit representations of modes, but require fully symbolic representations for both the continuous and discrete part of the state space. The optimization techniques shown consistently yield a speedup of about 20 against previously published results for a similar benchmark suite, and complement these with new results on counterexample guided abstraction refinement. In combination with the methods guaranteeing preciseness of abstractions, this allows us to significantly extend the class of models for which safety can be established, covering in particular models with 23 continuous variables and 2^71 discrete states, 20 continuous variables and 2^199 discrete states, and 9 continuous variables and 2^271 discrete states.
%B AVACS Technical Report
%N 103
%@ false
%U http://www.avacs.org/fileadmin/Publikationen/Open/avacs_technical_report_103.pdf
Diversifying Search Results Using Time
D. Gupta and K. Berberich
Technical Report, 2016
Abstract
Getting an overview of a historic entity or event can be difficult in search results, especially if important dates concerning the entity or event are not known beforehand. For such information needs, users would benefit if returned results covered diverse dates, thus giving an overview of what has happened throughout history. Diversifying search results based on important dates can be a building block for applications, for instance, in digital humanities. Historians would thus be able to quickly explore longitudinal document collections by querying for entities or events without knowing associated important dates a priori. In this work, we describe an approach to diversify search results using temporal expressions (e.g., in the 1990s) from their contents. Our approach first identifies time intervals of interest to the given keyword query based on pseudo-relevant documents. It then re-ranks query results so as to maximize the coverage of identified time intervals. We present a novel and objective evaluation for our proposed approach. We test the effectiveness of our methods on the New York Times Annotated corpus and the Living Knowledge corpus, collectively consisting of around 6 million documents. Using history-oriented queries and encyclopedic resources we show that our method indeed is able to present search results diversified along time.
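The re-ranking step can be read as a coverage-maximization problem, for which a greedy strategy is the textbook baseline. A small sketch under assumed data structures follows; the report's actual formulation and evaluation differ in detail.

def rerank_by_time_coverage(results, k=10):
    """Greedily pick results that cover the most not-yet-covered time
    intervals of interest; ties fall back to the original retrieval order.
    Each result is assumed to carry the intervals its temporal
    expressions were mapped to, e.g. {"intervals": {"1990s", "1994"}}."""
    covered, reranked, pool = set(), [], list(results)
    while pool and len(reranked) < k:
        best = max(pool, key=lambda d: len(set(d["intervals"]) - covered))
        covered |= set(best["intervals"])
        reranked.append(best)
        pool.remove(best)
    return reranked + pool  # remaining results keep their original order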
BibTeX
@techreport{GuptaReport2016-5-001,
TITLE = {Diversifying Search Results Using Time},
AUTHOR = {Gupta, Dhruv and Berberich, Klaus},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2016-5-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2016},
ABSTRACT = {Getting an overview of a historic entity or event can be difficult in search results, especially if important dates concerning the entity or event are not known beforehand. For such information needs, users would benefit if returned results covered diverse dates, thus giving an overview of what has happened throughout history. Diversifying search results based on important dates can be a building block for applications, for instance, in digital humanities. Historians would thus be able to quickly explore longitudinal document collections by querying for entities or events without knowing associated important dates a priori. In this work, we describe an approach to diversify search results using temporal expressions (e.g., in the 1990s) from their contents. Our approach first identifies time intervals of interest to the given keyword query based on pseudo-relevant documents. It then re-ranks query results so as to maximize the coverage of identified time intervals. We present a novel and objective evaluation for our proposed approach. We test the effectiveness of our methods on the New York Times Annotated corpus and the Living Knowledge corpus, collectively consisting of around 6 million documents. Using history-oriented queries and encyclopedic resources we show that our method indeed is able to present search results diversified along time.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Gupta, Dhruv
%A Berberich, Klaus
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Diversifying Search Results Using Time
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-002A-0AA4-C
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2016
%P 51 p.
%X Getting an overview of a historic entity or event can be difficult in search results, especially if important dates concerning the entity or event are not known beforehand. For such information needs, users would benefit if returned results covered diverse dates, thus giving an overview of what has happened throughout history. Diversifying search results based on important dates can be a building block for applications, for instance, in digital humanities. Historians would thus be able to quickly explore longitudinal document collections by querying for entities or events without knowing associated important dates a priori. In this work, we describe an approach to diversify search results using temporal expressions (e.g., in the 1990s) from their contents. Our approach first identifies time intervals of interest to the given keyword query based on pseudo-relevant documents. It then re-ranks query results so as to maximize the coverage of identified time intervals. We present a novel and objective evaluation for our proposed approach. We test the effectiveness of our methods on the New York Times Annotated corpus and the Living Knowledge corpus, collectively consisting of around 6 million documents. Using history-oriented queries and encyclopedic resources we show that our method indeed is able to present search results diversified along time.
%B Research Report
%@ false
Leveraging Semantic Annotations to Link Wikipedia and News Archives
A. Mishra and K. Berberich
Technical Report, 2016
Abstract
The incomprehensible amount of information available online has made it difficult to retrospect on past events. We propose a novel linking problem to connect excerpts from Wikipedia summarizing events to online news articles elaborating on them.
To address the linking problem, we cast it into an information retrieval task by treating a given excerpt as a user query with the goal to retrieve a ranked list of relevant news articles. We find that Wikipedia excerpts often come with additional semantics, in their textual descriptions, representing the time, geolocations, and named entities involved in the event. Our retrieval model leverages text and semantic annotations as different dimensions of an event by estimating independent query models to rank documents. In our experiments on two datasets, we compare methods that consider different combinations of dimensions and find that the approach that leverages all dimensions suits our problem best.
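One standard way to combine such independent per-dimension query models is a weighted log-linear mixture over text, time, geolocation, and entity likelihoods. The sketch below assumes a per-dimension likelihood interface and illustrative weights; it is not the paper's exact estimation procedure.

import math

def combined_score(doc, query_models, weights):
    """Rank documents by a weighted sum of per-dimension log-likelihoods,
    one query model per event dimension (text, time, geo, entity)."""
    score = 0.0
    for dim, model in query_models.items():
        p = model.likelihood(doc)          # P(q_dim | doc), assumed interface
        score += weights[dim] * math.log(max(p, 1e-12))  # floor avoids log(0)
    return score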
BibTeX
@techreport{MishraBerberich16,
TITLE = {Leveraging Semantic Annotations to Link Wikipedia and News Archives},
AUTHOR = {Mishra, Arunav and Berberich, Klaus},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2016-5-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2016},
ABSTRACT = {The incomprehensible amount of information available online has made it difficult to retrospect on past events. We propose a novel linking problem to connect excerpts from Wikipedia summarizing events to online news articles elaborating on them. To address the linking problem, we cast it into an information retrieval task by treating a given excerpt as a user query with the goal to retrieve a ranked list of relevant news articles. We find that Wikipedia excerpts often come with additional semantics, in their textual descriptions, representing the time, geolocations, and named entities involved in the event. Our retrieval model leverages text and semantic annotations as different dimensions of an event by estimating independent query models to rank documents. In our experiments on two datasets, we compare methods that consider different combinations of dimensions and find that the approach that leverages all dimensions suits our problem best.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Mishra, Arunav
%A Berberich, Klaus
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Leveraging Semantic Annotations to Link Wikipedia and News Archives
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0029-5FF0-A
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2016
%P 21 p.
%X The incomprehensible amount of information available online has made it difficult to retrospect on past events. We propose a novel linking problem to connect excerpts from Wikipedia summarizing events to online news articles elaborating on them.
To address the linking problem, we cast it into an information retrieval task by treating a given excerpt as a user query with the goal to retrieve a ranked list of relevant news articles. We find that Wikipedia excerpts often come with additional semantics, in their textual descriptions, representing the time, geolocations, and named entities involved in the event. Our retrieval model leverages text and semantic annotations as different dimensions of an event by estimating independent query models to rank documents. In our experiments on two datasets, we compare methods that consider different combinations of dimensions and find that the approach that leverages all dimensions suits our problem best.
%B Research Report
%@ false
Real-time Joint Tracking of a Hand Manipulating an Object from RGB-D Input
S. Sridhar, F. Mueller, M. Zollhöfer, D. Casas, A. Oulasvirta and C. Theobalt
Technical Report, 2016
Abstract
Real-time simultaneous tracking of hands manipulating and interacting with external objects has many potential applications in augmented reality, tangible computing, and wearable computing. However, due to difficult occlusions, fast motions, and uniform hand appearance, jointly tracking hand and object pose is more challenging than tracking either of the two separately. Many previous approaches resort to complex multi-camera setups to remedy the occlusion problem and often employ expensive segmentation and optimization steps which makes real-time tracking impossible. In this paper, we propose a real-time solution that uses a single commodity RGB-D camera. The core of our approach is a 3D articulated Gaussian mixture alignment strategy tailored to hand-object tracking that allows fast pose optimization. The alignment energy uses novel regularizers to address occlusions and hand-object contacts. For added robustness, we guide the optimization with discriminative part classification of the hand and segmentation of the object. We conducted extensive experiments on several existing datasets and introduce a new annotated hand-object dataset. Quantitative and qualitative results show the key advantages of our method: speed, accuracy, and robustness.
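The Gaussian mixture alignment idea can be illustrated with the classic kernel-correlation overlap between two isotropic mixtures; minimizing its negative pulls the posed model toward the observation. This is only the core term under simplifying assumptions; the paper adds articulation constraints plus the occlusion and contact regularizers on top.

import numpy as np

def gmm_alignment_energy(mu_model, sig_model, mu_obs, sig_obs):
    """Negative overlap of two isotropic 3D Gaussian mixtures;
    lower energy means better model-to-depth alignment (sketch)."""
    energy = 0.0
    for m, sm in zip(mu_model, sig_model):
        for o, so in zip(mu_obs, sig_obs):
            var = sm ** 2 + so ** 2
            d2 = float(np.sum((m - o) ** 2))
            # Closed-form integral of the product of two 3D Gaussians.
            energy -= np.exp(-d2 / (2.0 * var)) / (2.0 * np.pi * var) ** 1.5
    return energy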
BibTeX
@techreport{Report2016-4-001,
TITLE = {Real-time Joint Tracking of a Hand Manipulating an Object from {RGB-D} Input},
AUTHOR = {Sridhar, Srinath and Mueller, Franziska and Zollh{\"o}fer, Michael and Casas, Dan and Oulasvirta, Antti and Theobalt, Christian},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2016-4-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2016},
ABSTRACT = {Real-time simultaneous tracking of hands manipulating and interacting with external objects has many potential applications in augmented reality, tangible computing, and wearable computing. However, due to difficult occlusions, fast motions, and uniform hand appearance, jointly tracking hand and object pose is more challenging than tracking either of the two separately. Many previous approaches resort to complex multi-camera setups to remedy the occlusion problem and often employ expensive segmentation and optimization steps which makes real-time tracking impossible. In this paper, we propose a real-time solution that uses a single commodity RGB-D camera. The core of our approach is a 3D articulated Gaussian mixture alignment strategy tailored to hand-object tracking that allows fast pose optimization. The alignment energy uses novel regularizers to address occlusions and hand-object contacts. For added robustness, we guide the optimization with discriminative part classification of the hand and segmentation of the object. We conducted extensive experiments on several existing datasets and introduce a new annotated hand-object dataset. Quantitative and qualitative results show the key advantages of our method: speed, accuracy, and robustness.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Sridhar, Srinath
%A Mueller, Franziska
%A Zollhöfer, Michael
%A Casas, Dan
%A Oulasvirta, Antti
%A Theobalt, Christian
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Real-time Joint Tracking of a Hand Manipulating an Object from RGB-D Input
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-002B-5510-A
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2016
%P 31 p.
%X Real-time simultaneous tracking of hands manipulating and interacting with external objects has many potential applications in augmented reality, tangible computing, and wearable computing. However, due to difficult occlusions, fast motions, and uniform hand appearance, jointly tracking hand and object pose is more challenging than tracking either of the two separately. Many previous approaches resort to complex multi-camera setups to remedy the occlusion problem and often employ expensive segmentation and optimization steps which makes real-time tracking impossible. In this paper, we propose a real-time solution that uses a single commodity RGB-D camera. The core of our approach is a 3D articulated Gaussian mixture alignment strategy tailored to hand-object tracking that allows fast pose optimization. The alignment energy uses novel regularizers to address occlusions and hand-object contacts. For added robustness, we guide the optimization with discriminative part classification of the hand and segmentation of the object. We conducted extensive experiments on several existing datasets and introduce a new annotated hand-object dataset. Quantitative and qualitative results show the key advantages of our method: speed, accuracy, and robustness.
%B Research Report
%@ false
FullHand: Markerless Skeleton-based Tracking for Free-Hand Interaction
S. Sridhar, G. Bailly, E. Heydrich, A. Oulasvirta and C. Theobalt
Technical Report, 2016
Abstract
This paper advances a novel markerless hand tracking method for interactive applications. FullHand uses input from RGB and depth cameras in a desktop setting. It combines, in a voting scheme, a discriminative, part-based pose retrieval with a generative pose estimation method based on local optimization. We develop this approach to enable: (1) capturing hand articulations with a high number of degrees of freedom, including the motion of all fingers, (2) sufficient precision, shown in a dataset of user-generated gestures, and (3) a high framerate of 50 fps for one hand. We discuss the design of free-hand interactions with the tracker and present several demonstrations ranging from simple (few DOFs) to complex (finger individuation plus global hand motion), including mouse operation, a first-person shooter and virtual globe navigation. A user study on the latter shows that free-hand interactions implemented for the tracker can equal mouse-based interactions in user performance.
BibTeX
@techreport{Report2016-4-002,
TITLE = {{FullHand}: {M}arkerless Skeleton-based Tracking for Free-Hand Interaction},
AUTHOR = {Sridhar, Srinath and Bailly, Gilles and Heydrich, Elias and Oulasvirta, Antti and Theobalt, Christian},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2016-4-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2016},
ABSTRACT = {This paper advances a novel markerless hand tracking method for interactive applications. FullHand uses input from RGB and depth cameras in a desktop setting. It combines, in a voting scheme, a discriminative, part-based pose retrieval with a generative pose estimation method based on local optimization. We develop this approach to enable: (1) capturing hand articulations with a high number of degrees of freedom, including the motion of all fingers, (2) sufficient precision, shown in a dataset of user-generated gestures, and (3) a high framerate of 50 fps for one hand. We discuss the design of free-hand interactions with the tracker and present several demonstrations ranging from simple (few DOFs) to complex (finger individuation plus global hand motion), including mouse operation, a first-person shooter and virtual globe navigation. A user study on the latter shows that free-hand interactions implemented for the tracker can equal mouse-based interactions in user performance.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Sridhar, Srinath
%A Bailly, Gilles
%A Heydrich, Elias
%A Oulasvirta, Antti
%A Theobalt, Christian
%+ Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
%T FullHand: Markerless Skeleton-based Tracking for Free-Hand Interaction
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-002B-7456-7
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2016
%P 11 p.
%X This paper advances a novel markerless hand tracking method for interactive applications. FullHand uses input from RGB and depth cameras in a desktop setting. It combines, in a voting scheme, a discriminative, part-based pose retrieval with a generative pose estimation method based on local optimization. We develop this approach to enable: (1) capturing hand articulations with a high number of degrees of freedom, including the motion of all fingers, (2) sufficient precision, shown in a dataset of user-generated gestures, and (3) a high framerate of 50 fps for one hand. We discuss the design of free-hand interactions with the tracker and present several demonstrations ranging from simple (few DOFs) to complex (finger individuation plus global hand motion), including mouse operation, a first-person shooter and virtual globe navigation. A user study on the latter shows that free-hand interactions implemented for the tracker can equal mouse-based interactions in user performance.
%B Research Report
%@ false
2015
Computational Modelling and Prediction of Gaze Estimation Error for Head-mounted Eye Trackers
M. Barz, A. Bulling and F. Daiber
Technical Report, 2015
Abstract
Head-mounted eye tracking has significant potential for mobile gaze-based interaction with ambient displays but current interfaces lack information about the tracker's gaze estimation error. Consequently, current interfaces do not exploit the full potential of gaze input as the inherent estimation error cannot be dealt with. The error depends on the physical properties of the display and constantly varies with changes in position and distance of the user to the display. In this work we present a computational model of gaze estimation error for head-mounted eye trackers. Our model covers the full processing pipeline for mobile gaze estimation, namely mapping of pupil positions to scene camera coordinates, marker-based display detection, and display mapping. We build the model based on a series of controlled measurements of a sample state-of-the-art monocular head-mounted eye tracker. Results show that our model can predict gaze estimation error with a root mean squared error of 17.99 px (1.96°).
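The pixel and degree figures are related through the display geometry. A small conversion helper shows how a pixel error maps to visual angle for a given pixel density and viewing distance; the model itself predicts the pixel error, and the conversion is just trigonometry. The parameters below are illustrative, not the study's setup.

import math

def px_error_to_degrees(err_px, px_per_mm, distance_mm):
    """Convert an on-display gaze error from pixels to visual angle."""
    err_mm = err_px / px_per_mm
    return math.degrees(2.0 * math.atan(err_mm / (2.0 * distance_mm)))

# e.g. px_error_to_degrees(17.99, px_per_mm=1.05, distance_mm=500.0)
# gives roughly 1.96 degrees for these assumed display parameters.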
BibTeX
@techreport{Barz_Rep15,
TITLE = {Computational Modelling and Prediction of Gaze Estimation Error for Head-mounted Eye Trackers},
AUTHOR = {Barz, Michael and Bulling, Andreas and Daiber, Florian},
LANGUAGE = {eng},
URL = {https://perceptual.mpi-inf.mpg.de/files/2015/01/gazequality.pdf},
NUMBER = {15-01},
INSTITUTION = {DFKI},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2015},
ABSTRACT = {Head-mounted eye tracking has significant potential for mobile gaze-based interaction with ambient displays but current interfaces lack information about the tracker's gaze estimation error. Consequently, current interfaces do not exploit the full potential of gaze input as the inherent estimation error cannot be dealt with. The error depends on the physical properties of the display and constantly varies with changes in position and distance of the user to the display. In this work we present a computational model of gaze estimation error for head-mounted eye trackers. Our model covers the full processing pipeline for mobile gaze estimation, namely mapping of pupil positions to scene camera coordinates, marker-based display detection, and display mapping. We build the model based on a series of controlled measurements of a sample state-of-the-art monocular head-mounted eye tracker. Results show that our model can predict gaze estimation error with a root mean squared error of 17.99~px ($1.96^{\circ}$).},
TYPE = {DFKI Research Report},
}
Endnote
%0 Report
%A Barz, Michael
%A Bulling, Andreas
%A Daiber, Florian
%+ External Organizations
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society
External Organizations
%T Computational Modelling and Prediction of Gaze Estimation Error for Head-mounted Eye Trackers
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0024-B972-8
%U https://perceptual.mpi-inf.mpg.de/files/2015/01/gazequality.pdf
%Y DFKI
%C Saarbrücken
%D 2015
%8 01.01.2015
%P 10 p.
%X Head-mounted eye tracking has significant potential for mobile gaze-based interaction with ambient displays but current interfaces lack information about the tracker's gaze estimation error. Consequently, current interfaces do not exploit the full potential of gaze input as the inherent estimation error cannot be dealt with. The error depends on the physical properties of the display and constantly varies with changes in position and distance of the user to the display. In this work we present a computational model of gaze estimation error for head-mounted eye trackers. Our model covers the full processing pipeline for mobile gaze estimation, namely mapping of pupil positions to scene camera coordinates, marker-based display detection, and display mapping. We build the model based on a series of controlled measurements of a sample state-of-the-art monocular head-mounted eye tracker. Results show that our model can predict gaze estimation error with a root mean squared error of 17.99 px (1.96°).
%B DFKI Research Report
%U http://www.dfki.de/web/forschung/publikationen/renameFileForDownload?filename=gazequality.pdf&file_id=uploads_2388
Decidability of Verification of Safety Properties of Spatial Families of Linear Hybrid Automata
W. Damm, M. Horbach and V. Sofronie-Stokkermans
Technical Report, 2015
BibTeX
@techreport{atr111,
TITLE = {Decidability of Verification of Safety Properties of Spatial Families of Linear Hybrid Automata},
AUTHOR = {Damm, Werner and Horbach, Matthias and Sofronie-Stokkermans, Viorica},
LANGUAGE = {eng},
ISSN = {1860-9821},
NUMBER = {ATR111},
INSTITUTION = {SFB/TR 14 AVACS},
YEAR = {2015},
TYPE = {AVACS Technical Report},
VOLUME = {111},
}
Endnote
%0 Report
%A Damm, Werner
%A Horbach, Matthias
%A Sofronie-Stokkermans, Viorica
%+ External Organizations
Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
%T Decidability of Verification of Safety Properties of Spatial Families of Linear Hybrid Automata
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-002A-0805-6
%Y SFB/TR 14 AVACS
%D 2015
%P 52 p.
%B AVACS Technical Report
%N 111
%@ false
GazeProjector: Location-independent Gaze Interaction on and Across Multiple Displays
C. Lander, S. Gehring, A. Krüger, S. Boring and A. Bulling
Technical Report, 2015
Abstract
Mobile gaze-based interaction with multiple displays may occur from arbitrary positions and orientations. However, maintaining high gaze estimation accuracy still represents a significant challenge. To address this, we present GazeProjector, a system that combines accurate point-of-gaze estimation with natural feature tracking on displays to determine the mobile eye tracker's position relative to a display. The detected eye positions are transformed onto that display allowing for gaze-based interaction. This allows for seamless gaze estimation and interaction on (1) multiple displays of arbitrary sizes, (2) independently of the user's position and orientation to the display. In a user study with 12 participants we compared GazeProjector to existing well-established methods such as visual on-screen markers and a state-of-the-art motion capture system. Our results show that our approach is robust to varying head poses, orientations, and distances to the display, while still providing high gaze estimation accuracy across multiple displays without re-calibration. The system represents an important step towards the vision of pervasive gaze-based interfaces.
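The core geometric step, transferring an estimated gaze point from the tracker's scene camera onto a localized display, amounts to applying a planar homography once natural feature tracking has registered that display. A minimal sketch, with the homography H assumed to come from the feature tracker:

import numpy as np

def gaze_to_display(gaze_xy, H):
    """Map a point-of-gaze estimate from scene-camera coordinates into
    display coordinates via a 3x3 homography H (one per visible display)."""
    u, v, w = H @ np.array([gaze_xy[0], gaze_xy[1], 1.0])
    return u / w, v / w  # homogeneous normalization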
BibTeX
@techreport{Lander_Rep15,
TITLE = {{GazeProjector}: Location-independent Gaze Interaction on and Across Multiple Displays},
AUTHOR = {Lander, Christian and Gehring, Sven and Kr{\"u}ger, Antonio and Boring, Sebastian and Bulling, Andreas},
LANGUAGE = {eng},
URL = {http://www.dfki.de/web/research/publications?pubid=7618},
NUMBER = {15-01},
INSTITUTION = {DFKI},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2015},
ABSTRACT = {Mobile gaze-based interaction with multiple displays may occur from arbitrary positions and orientations. However, maintaining high gaze estimation accuracy still represents a significant challenge. To address this, we present GazeProjector, a system that combines accurate point-of-gaze estimation with natural feature tracking on displays to determine the mobile eye tracker{\textquoteright}s position relative to a display. The detected eye positions are transformed onto that display allowing for gaze-based interaction. This allows for seamless gaze estimation and interaction on (1) multiple displays of arbitrary sizes, (2) independently of the user{\textquoteright}s position and orientation to the display. In a user study with 12 participants we compared GazeProjector to existing well-established methods such as visual on-screen markers and a state-of-the-art motion capture system. Our results show that our approach is robust to varying head poses, orientations, and distances to the display, while still providing high gaze estimation accuracy across multiple displays without re-calibration. The system represents an important step towards the vision of pervasive gaze-based interfaces.},
TYPE = {DFKI Research Report},
}
Endnote
%0 Report
%A Lander, Christian
%A Gehring, Sven
%A Krüger, Antonio
%A Boring, Sebastian
%A Bulling, Andreas
%+ External Organizations
External Organizations
External Organizations
External Organizations
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society
%T GazeProjector: Location-independent Gaze Interaction on and Across Multiple Displays
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0024-B947-A
%U http://www.dfki.de/web/research/publications?pubid=7618
%Y DFKI
%C Saarbrücken
%D 2015
%8 01.01.2015
%P 10 p.
%X Mobile gaze-based interaction with multiple displays may occur from arbitrary positions and orientations. However, maintaining high gaze estimation accuracy still represents a significant challenge. To address this, we present GazeProjector, a system that combines accurate point-of-gaze estimation with natural feature tracking on displays to determine the mobile eye tracker's position relative to a display. The detected eye positions are transformed onto that display allowing for gaze-based interaction. This allows for seamless gaze estimation and interaction on (1) multiple displays of arbitrary sizes, (2) independently of the user's position and orientation to the display. In a user study with 12 participants we compared GazeProjector to existing well-established methods such as visual on-screen markers and a state-of-the-art motion capture system. Our results show that our approach is robust to varying head poses, orientations, and distances to the display, while still providing high gaze estimation accuracy across multiple displays without re-calibration. The system represents an important step towards the vision of pervasive gaze-based interfaces.
%B DFKI Research Report
Modal Tableau Systems with Blocking and Congruence Closure
R. A. Schmidt and U. Waldmann
Technical Report, 2015
BibTeX
@techreport{SchmidtTR2015,
TITLE = {Modal Tableau Systems with Blocking and Congruence Closure},
AUTHOR = {Schmidt, Renate A. and Waldmann, Uwe},
LANGUAGE = {eng},
NUMBER = {uk-ac-man-scw:268816},
INSTITUTION = {University of Manchester},
ADDRESS = {Manchester},
YEAR = {2015},
TYPE = {eScholar},
}
Endnote
%0 Report
%A Schmidt, Renate A.
%A Waldmann, Uwe
%+ External Organizations
Automation of Logic, MPI for Informatics, Max Planck Society
%T Modal Tableau Systems with Blocking and Congruence Closure
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-002A-08BC-A
%Y University of Manchester
%C Manchester
%D 2015
%P 22 p.
%B eScholar
%U https://www.escholar.manchester.ac.uk/uk-ac-man-scw:268816
%U https://www.research.manchester.ac.uk/portal/files/32297317/FULL_TEXT.PDF
2014
Phrase Query Optimization on Inverted Indexes
A. Anand, I. Mele, S. Bedathur and K. Berberich
Technical Report, 2014
Abstract
Phrase queries are a key functionality of modern search engines. Beyond that, they increasingly serve as an important building block for applications such as entity-oriented search, text analytics, and plagiarism detection. Processing phrase queries is costly, though, since positional information has to be kept in the index and all words, including stopwords, need to be considered.
We consider an augmented inverted index that indexes selected variable-length multi-word sequences in addition to single words. We study how arbitrary phrase queries can be processed efficiently on such an augmented inverted index. We show that the underlying optimization problem is NP-hard in the general case and describe an exact exponential algorithm and an approximation algorithm to its solution. Experiments on ClueWeb09 and The New York Times with different real-world query workloads examine the practical performance of our methods.
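For intuition, restricting the problem to contiguous segmentations turns it into a shortest-path exercise over prefixes: split the phrase into indexed word sequences with minimal summed posting-list cost. The general selection problem the report studies is NP-hard; this dynamic-programming sketch, with hypothetical index and cost structures, covers only that easy special case.

def cheapest_segmentation(terms, indexed_costs):
    """Split a phrase into indexed word sequences (keys of indexed_costs,
    tuples of terms) minimizing total processing cost; DP over prefixes."""
    n = len(terms)
    best = [0.0] + [float("inf")] * n
    cut = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j):
            seq = tuple(terms[i:j])
            cost = indexed_costs.get(seq)
            if cost is not None and best[i] + cost < best[j]:
                best[j], cut[j] = best[i] + cost, i
    if best[n] == float("inf"):
        return None                      # phrase not coverable by the index
    segments, j = [], n
    while j > 0:                         # backtrack over the chosen cuts
        segments.append(tuple(terms[cut[j]:j]))
        j = cut[j]
    return best[n], segments[::-1]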
BibTeX
@techreport{AnandMeleBedathurBerberich2014,
TITLE = {Phrase Query Optimization on Inverted Indexes},
AUTHOR = {Anand, Avishek and Mele, Ida and Bedathur, Srikanta and Berberich, Klaus},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2014-5-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2014},
ABSTRACT = {Phrase queries are a key functionality of modern search engines. Beyond that, they increasingly serve as an important building block for applications such as entity-oriented search, text analytics, and plagiarism detection. Processing phrase queries is costly, though, since positional information has to be kept in the index and all words, including stopwords, need to be considered. We consider an augmented inverted index that indexes selected variable-length multi-word sequences in addition to single words. We study how arbitrary phrase queries can be processed efficiently on such an augmented inverted index. We show that the underlying optimization problem is NP-hard in the general case and describe an exact exponential algorithm and an approximation algorithm to its solution. Experiments on ClueWeb09 and The New York Times with different real-world query workloads examine the practical performance of our methods.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Anand, Avishek
%A Mele, Ida
%A Bedathur, Srikanta
%A Berberich, Klaus
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Phrase Query Optimization on Inverted Indexes :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0024-022A-3
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2014
%P 20 p.
%X Phrase queries are a key functionality of modern search engines. Beyond that, they increasingly serve as an important building block for applications such as entity-oriented search, text analytics, and plagiarism detection. Processing phrase queries is costly, though, since positional information has to be kept in the index and all words, including stopwords, need to be considered.
We consider an augmented inverted index that indexes selected variable-length multi-word sequences in addition to single words. We study how arbitrary phrase queries can be processed efficiently on such an augmented inverted index. We show that the underlying optimization problem is NP-hard in the general case and describe an exact exponential algorithm and an approximation algorithm to its solution. Experiments on ClueWeb09 and The New York Times with different real-world query workloads examine the practical performance of our methods.
%B Research Report
%@ false
Learning Tuple Probabilities in Probabilistic Databases
M. Dylla and M. Theobald
Technical Report, 2014
M. Dylla and M. Theobald
Technical Report, 2014
Abstract
Learning the parameters of complex probabilistic-relational models from labeled
training data is a standard technique in machine learning, which has been
intensively studied in the subfield of Statistical Relational Learning (SRL),
but---so far---this is still an under-investigated topic in the context of
Probabilistic Databases (PDBs). In this paper, we focus on learning the
probability values of base tuples in a PDB from query answers, the latter of
which are represented as labeled lineage formulas. Specifically, we consider
labels in the form of pairs, each consisting of a Boolean lineage formula and a
marginal probability that comes attached to the corresponding query answer. The
resulting learning problem can be viewed as the inverse problem to confidence
computations in PDBs: given a set of labeled query answers, learn the
probability values of the base tuples, such that the marginal probabilities of
the query answers again yield the assigned probability labels. We analyze
the learning problem from a theoretical perspective, devise two
optimization-based objectives, and provide an efficient algorithm (based on
Stochastic Gradient Descent) for solving these objectives. Finally, we conclude
this work by an experimental evaluation on three real-world and one synthetic
dataset, while competing with various techniques from SRL, reasoning in
information extraction, and optimization.
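To make the inverse problem concrete, here is a small Python sketch (ours, not the authors' implementation): the marginal probability of a DNF lineage formula over independent tuples is computed by enumerating possible worlds, and tuple probabilities are fitted by stochastic gradient descent on a squared-error objective, with numeric gradients for brevity.

import itertools

def marginal(lineage, probs):
    # Exact P(lineage) for a DNF formula (a list of AND-clauses over
    # tuple ids), enumerating worlds over the tuples it mentions.
    tuples = sorted({t for clause in lineage for t in clause})
    total = 0.0
    for world in itertools.product([0, 1], repeat=len(tuples)):
        w = dict(zip(tuples, world))
        if any(all(w[t] for t in clause) for clause in lineage):
            weight = 1.0
            for t in tuples:
                weight *= probs[t] if w[t] else 1.0 - probs[t]
            total += weight
    return total

def sgd_step(lineage, label, probs, lr=0.5, eps=1e-4):
    # One step on the squared error between marginal and label.
    for t in {t for clause in lineage for t in clause}:
        orig = probs[t]
        probs[t] = orig + eps
        up = (marginal(lineage, probs) - label) ** 2
        probs[t] = orig - eps
        down = (marginal(lineage, probs) - label) ** 2
        grad = (up - down) / (2 * eps)
        probs[t] = min(1.0 - 1e-6, max(1e-6, orig - lr * grad))

probs = {"t1": 0.5, "t2": 0.5}
for _ in range(200):
    sgd_step([("t1", "t2")], 0.09, probs)  # label: P(t1 AND t2) = 0.09
print(probs)  # both tuple probabilities approach 0.3

Real lineage formulas are far too large for world enumeration; the point here is only the shape of the learning loop.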
Export
BibTeX
@techreport{Dylla-Learning2014,
TITLE = {Learning Tuple Probabilities in Probabilistic Databases},
AUTHOR = {Dylla, Maximilian and Theobald, Martin},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2014-5-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2014},
ABSTRACT = {Learning the parameters of complex probabilistic-relational models from labeled training data is a standard technique in machine learning, which has been intensively studied in the subfield of Statistical Relational Learning (SRL), but---so far---this is still an under-investigated topic in the context of Probabilistic Databases (PDBs). In this paper, we focus on learning the probability values of base tuples in a PDB from query answers, the latter of which are represented as labeled lineage formulas. Specifically, we consider labels in the form of pairs, each consisting of a Boolean lineage formula and a marginal probability that comes attached to the corresponding query answer. The resulting learning problem can be viewed as the inverse problem to confidence computations in PDBs: given a set of labeled query answers, learn the probability values of the base tuples, such that the marginal probabilities of the query answers again yield the assigned probability labels. We analyze the learning problem from a theoretical perspective, devise two optimization-based objectives, and provide an efficient algorithm (based on Stochastic Gradient Descent) for solving these objectives. Finally, we conclude this work by an experimental evaluation on three real-world and one synthetic dataset, while competing with various techniques from SRL, reasoning in information extraction, and optimization.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Dylla, Maximilian
%A Theobald, Martin
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Learning Tuple Probabilities in Probabilistic Databases :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0019-8492-6
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2014
%P 51 p.
%X Learning the parameters of complex probabilistic-relational models from labeled
training data is a standard technique in machine learning, which has been
intensively studied in the subfield of Statistical Relational Learning (SRL),
but---so far---this is still an under-investigated topic in the context of
Probabilistic Databases (PDBs). In this paper, we focus on learning the
probability values of base tuples in a PDB from query answers, the latter of
which are represented as labeled lineage formulas. Specifically, we consider
labels in the form of pairs, each consisting of a Boolean lineage formula and a
marginal probability that comes attached to the corresponding query answer. The
resulting learning problem can be viewed as the inverse problem to confidence
computations in PDBs: given a set of labeled query answers, learn the
probability values of the base tuples, such that the marginal probabilities of
the query answers again yield the assigned probability labels. We analyze
the learning problem from a theoretical perspective, devise two
optimization-based objectives, and provide an efficient algorithm (based on
Stochastic Gradient Descent) for solving these objectives. Finally, we conclude
this work by an experimental evaluation on three real-world and one synthetic
dataset, while competing with various techniques from SRL, reasoning in
information extraction, and optimization.
%B Research Report
%@ false
Obtaining Finite Local Theory Axiomatizations via Saturation
M. Horbach and V. Sofronie-Stokkermans
Technical Report, 2014
M. Horbach and V. Sofronie-Stokkermans
Technical Report, 2014
Abstract
In this paper we study theory combinations over non-disjoint
signatures in which hierarchical and modular reasoning is
possible. We use a notion of locality of a theory extension
parameterized by a closure operator on ground terms.
We give criteria for recognizing these types of theory
extensions. We then show that combinations of extensions of
theories which are local in this extended sense also have a
locality property and hence allow modular and hierarchical
reasoning. We thus obtain parameterized decidability and
complexity results for many (combinations of) theories
important in verification.
Export
BibTeX
@techreport{atr093,
TITLE = {Obtaining Finite Local Theory Axiomatizations via Saturation},
AUTHOR = {Horbach, Matthias and Sofronie-Stokkermans, Viorica},
LANGUAGE = {eng},
ISSN = {1860-9821},
NUMBER = {ATR93},
INSTITUTION = {SFB/TR 14 AVACS},
YEAR = {2014},
ABSTRACT = {In this paper we study theory combinations over non-disjoint signatures in which hierarchical and modular reasoning is possible. We use a notion of locality of a theory extension parameterized by a closure operator on ground terms. We give criteria for recognizing these types of theory extensions. We then show that combinations of extensions of theories which are local in this extended sense also have a locality property and hence allow modular and hierarchical reasoning. We thus obtain parameterized decidability and complexity results for many (combinations of) theories important in verification.},
TYPE = {AVACS Technical Report},
VOLUME = {93},
}
Endnote
%0 Report
%A Horbach, Matthias
%A Sofronie-Stokkermans, Viorica
%+ Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
%T Obtaining Finite Local Theory Axiomatizations via Saturation :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0024-C90C-F
%Y SFB/TR 14 AVACS
%D 2014
%P 26 p.
%X In this paper we study theory combinations over non-disjoint
signatures in which hierarchical and modular reasoning is
possible. We use a notion of locality of a theory extension
parameterized by a closure operator on ground terms.
We give criteria for recognizing these types of theory
extensions. We then show that combinations of extensions of
theories which are local in this extended sense also have a
locality property and hence allow modular and hierarchical
reasoning. We thus obtain parameterized decidability and
complexity results for many (combinations of) theories
important in verification.
%B AVACS Technical Report
%N 93
%@ false
%U http://www.avacs.org/Publikationen/Open/avacs_technical_report_093.pdf
Local High-order Regularization on Data Manifolds
K. I. Kim, J. Tompkin and C. Theobalt
Technical Report, 2014
K. I. Kim, J. Tompkin and C. Theobalt
Technical Report, 2014
Export
BibTeX
@techreport{KimTR2014,
TITLE = {Local High-order Regularization on Data Manifolds},
AUTHOR = {Kim, Kwang In and Tompkin, James and Theobalt, Christian},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2014-4-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2014},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Kim, Kwang In
%A Tompkin, James
%A Theobalt, Christian
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Local High-order Regularization on Data Manifolds :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0024-B210-7
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2014
%P 12 p.
%B Research Report
%@ false
Fast Tracking of Hand and Finger Articulations Using a Single Depth Camera
S. Sridhar, A. Oulasvirta and C. Theobalt
Technical Report, 2014
S. Sridhar, A. Oulasvirta and C. Theobalt
Technical Report, 2014
Export
BibTeX
@techreport{Sridhar2014,
TITLE = {Fast Tracking of Hand and Finger Articulations Using a Single Depth Camera},
AUTHOR = {Sridhar, Srinath and Oulasvirta, Antti and Theobalt, Christian},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2014-4-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2014},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Sridhar, Srinath
%A Oulasvirta, Antti
%A Theobalt, Christian
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Fast Tracking of Hand and Finger Articulations Using a Single Depth Camera :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0024-B5B8-8
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2014
%P 14 p.
%B Research Report
%@ false
2013
Hierarchic Superposition with Weak Abstraction
P. Baumgartner and U. Waldmann
Technical Report, 2013
P. Baumgartner and U. Waldmann
Technical Report, 2013
Abstract
Many applications of automated deduction require reasoning in
first-order logic modulo background theories, in particular some
form of integer arithmetic. A major unsolved research challenge
is to design theorem provers that are "reasonably complete"
even in the presence of free function symbols ranging into a
background theory sort. The hierarchic superposition calculus
of Bachmair, Ganzinger, and Waldmann already supports such
symbols, but, as we demonstrate, not optimally. This paper aims
to rectify the situation by introducing a novel form of clause
abstraction, a core component in the hierarchic superposition
calculus for transforming clauses into a form needed for internal
operation. We argue for the benefits of the resulting calculus
and provide a new completeness result for the fragment where
all background-sorted terms are ground.
Export
BibTeX
@techreport{Waldmann2013,
TITLE = {Hierarchic Superposition with Weak Abstraction},
AUTHOR = {Baumgartner, Peter and Waldmann, Uwe},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2014-RG1-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2013},
ABSTRACT = {Many applications of automated deduction require reasoning in first-order logic modulo background theories, in particular some form of integer arithmetic. A major unsolved research challenge is to design theorem provers that are "reasonably complete" even in the presence of free function symbols ranging into a background theory sort. The hierarchic superposition calculus of Bachmair, Ganzinger, and Waldmann already supports such symbols, but, as we demonstrate, not optimally. This paper aims to rectify the situation by introducing a novel form of clause abstraction, a core component in the hierarchic superposition calculus for transforming clauses into a form needed for internal operation. We argue for the benefits of the resulting calculus and provide a new completeness result for the fragment where all background-sorted terms are ground.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Baumgartner, Peter
%A Waldmann, Uwe
%+ External Organizations
Automation of Logic, MPI for Informatics, Max Planck Society
%T Hierarchic Superposition with Weak Abstraction :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0024-03A8-0
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2013
%P 45 p.
%X Many applications of automated deduction require reasoning in
first-order logic modulo background theories, in particular some
form of integer arithmetic. A major unsolved research challenge
is to design theorem provers that are "reasonably complete"
even in the presence of free function symbols ranging into a
background theory sort. The hierarchic superposition calculus
of Bachmair, Ganzinger, and Waldmann already supports such
symbols, but, as we demonstrate, not optimally. This paper aims
to rectify the situation by introducing a novel form of clause
abstraction, a core component in the hierarchic superposition
calculus for transforming clauses into a form needed for internal
operation. We argue for the benefits of the resulting calculus
and provide a new completeness result for the fragment where
all background-sorted terms are ground.
%B Research Report
%@ false
New Results for Non-preemptive Speed Scaling
C.-C. Huang and S. Ott
Technical Report, 2013
C.-C. Huang and S. Ott
Technical Report, 2013
Abstract
We consider the speed scaling problem introduced in the seminal paper of Yao et al. In this problem, a number of jobs, each with its own processing volume, release time, and deadline, needs to be executed on a speed-scalable processor. The power consumption of this processor is $P(s) = s^\alpha$, where $s$ is the processing speed, and $\alpha > 1$ is a constant. The total energy consumption is power integrated over time, and the goal is to process all jobs while minimizing the energy consumption.
The preemptive version of the problem, along with its many variants, has been extensively studied over the years. However, little is known about the non-preemptive version of the problem, except that it is strongly NP-hard and allows a constant factor approximation. Up until now, the (general) complexity of this problem is unknown. In the present paper, we study an important special case of the problem, where the job intervals form a laminar family, and present a quasipolynomial-time approximation scheme for it, thereby showing that (at least) this special case is not APX-hard, unless $NP \subseteq DTIME(2^{poly(\log n)})$.
The second contribution of this work is a polynomial-time algorithm for the special case of equal-volume jobs, where previously only a $2^\alpha$ approximation was known. In addition, we show that two other special cases of this problem allow fully polynomial-time approximation schemes (FPTASs).
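The objective's shape is easy to verify numerically: a job of volume $V$ run at constant speed $s$ takes $V/s$ time and costs $(V/s) \cdot s^\alpha = V s^{\alpha-1}$ energy, so within a single job's window the slowest feasible constant speed is best. A toy check in Python (numbers ours):

def energy(volume, speed, alpha=3.0):
    # Energy = duration * power, with power P(s) = s**alpha.
    return (volume / speed) * speed ** alpha

v, window = 10.0, 5.0        # volume 10, release 0, deadline 5
s_min = v / window           # slowest speed that still meets the deadline
print(energy(v, s_min))      # 5 * 2**3   = 40.0
print(energy(v, 2 * s_min))  # 2.5 * 4**3 = 160.0

The hard part of the non-preemptive problem is not this local calculation but deciding how jobs share the timeline, which is where the laminar-family structure is exploited.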
Export
BibTeX
@techreport{HuangOtt2013,
TITLE = {New Results for Non-preemptive Speed Scaling},
AUTHOR = {Huang, Chien-Chung and Ott, Sebastian},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2013-1-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2013},
ABSTRACT = {We consider the speed scaling problem introduced in the seminal paper of Yao et al. In this problem, a number of jobs, each with its own processing volume, release time, and deadline, needs to be executed on a speed-scalable processor. The power consumption of this processor is $P(s) = s^\alpha$, where $s$ is the processing speed, and $\alpha > 1$ is a constant. The total energy consumption is power integrated over time, and the goal is to process all jobs while minimizing the energy consumption. The preemptive version of the problem, along with its many variants, has been extensively studied over the years. However, little is known about the non-preemptive version of the problem, except that it is strongly NP-hard and allows a constant factor approximation. Up until now, the (general) complexity of this problem is unknown. In the present paper, we study an important special case of the problem, where the job intervals form a laminar family, and present a quasipolynomial-time approximation scheme for it, thereby showing that (at least) this special case is not APX-hard, unless $NP \subseteq DTIME(2^{poly(\log n)})$. The second contribution of this work is a polynomial-time algorithm for the special case of equal-volume jobs, where previously only a $2^\alpha$ approximation was known. In addition, we show that two other special cases of this problem allow fully polynomial-time approximation schemes (FPTASs).},
TYPE = {Research Reports},
}
Endnote
%0 Report
%A Huang, Chien-Chung
%A Ott, Sebastian
%+ External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T New Results for Non-preemptive Speed Scaling :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0024-03BF-D
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2013
%P 32 p.
%X We consider the speed scaling problem introduced in the seminal paper of Yao et al. In this problem, a number of jobs, each with its own processing volume, release time, and deadline, needs to be executed on a speed-scalable processor. The power consumption of this processor is $P(s) = s^\alpha$, where $s$ is the processing speed, and $\alpha > 1$ is a constant. The total energy consumption is power integrated over time, and the goal is to process all jobs while minimizing the energy consumption.
The preemptive version of the problem, along with its many variants, has been extensively studied over the years. However, little is known about the non-preemptive version of the problem, except that it is strongly NP-hard and allows a constant factor approximation. Up until now, the (general) complexity of this problem is unknown. In the present paper, we study an important special case of the problem, where the job intervals form a laminar family, and present a quasipolynomial-time approximation scheme for it, thereby showing that (at least) this special case is not APX-hard, unless $NP \subseteq DTIME(2^{poly(\log n)})$.
The second contribution of this work is a polynomial-time algorithm for the special case of equal-volume jobs, where previously only a $2^\alpha$ approximation was known. In addition, we show that two other special cases of this problem allow fully polynomial-time approximation schemes (FPTASs).
%B Research Reports
%@ false
A Distributed Algorithm for Large-scale Generalized Matching
F. Makari, B. Awerbuch, R. Gemulla, R. Khandekar, J. Mestre and M. Sozio
Technical Report, 2013
F. Makari, B. Awerbuch, R. Gemulla, R. Khandekar, J. Mestre and M. Sozio
Technical Report, 2013
Abstract
Generalized matching problems arise in a number of applications, including
computational advertising, recommender systems, and trade markets. Consider,
for example, the problem of recommending multimedia items (e.g., DVDs) to
users such that (1) users are recommended items that they are likely to be
interested in, (2) every user gets neither too few nor too many
recommendations, and (3) only items available in stock are recommended to
users. State-of-the-art matching algorithms fail at coping with large
real-world instances, which may involve millions of users and items. We
propose the first distributed algorithm for computing near-optimal solutions
to large-scale generalized matching problems like the one above. Our algorithm
is designed to run on a small cluster of commodity nodes (or in a MapReduce
environment), has strong approximation guarantees, and requires only a
poly-logarithmic number of passes over the input. In particular, we propose a
novel distributed algorithm to approximately solve mixed packing-covering
linear programs, which include but are not limited to generalized matching
problems. Experiments on real-world and synthetic data suggest that our
algorithm scales to very large problem sizes and can be orders of magnitude
faster than alternative approaches.
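Written as a linear program, the recommendation example above is a generalized (degree- and capacity-constrained) matching; in our notation, not necessarily the report's:

\[ \max \sum_{u,i} w_{ui}\, x_{ui} \quad\text{s.t.}\quad \ell_u \le \sum_i x_{ui} \le b_u \;\;\forall u, \qquad \sum_u x_{ui} \le c_i \;\;\forall i, \qquad 0 \le x_{ui} \le 1, \]

where $w_{ui}$ scores user $u$'s interest in item $i$, $[\ell_u, b_u]$ bounds the number of recommendations per user, and $c_i$ is the stock of item $i$. The per-user lower bounds are covering constraints while the upper bounds and capacities are packing constraints, which is why a solver for mixed packing-covering LPs subsumes the problem.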
Export
BibTeX
@techreport{MakariAwerbuchGemullaKhandekarMestreSozio2013,
TITLE = {A Distributed Algorithm for Large-scale Generalized Matching},
AUTHOR = {Makari, Faraz and Awerbuch, Baruch and Gemulla, Rainer and Khandekar, Rohit and Mestre, Julian and Sozio, Mauro},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2013-5-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2013},
ABSTRACT = {Generalized matching problems arise in a number of applications, including computational advertising, recommender systems, and trade markets. Consider, for example, the problem of recommending multimedia items (e.g., DVDs) to users such that (1) users are recommended items that they are likely to be interested in, (2) every user gets neither too few nor too many recommendations, and (3) only items available in stock are recommended to users. State-of-the-art matching algorithms fail at coping with large real-world instances, which may involve millions of users and items. We propose the first distributed algorithm for computing near-optimal solutions to large-scale generalized matching problems like the one above. Our algorithm is designed to run on a small cluster of commodity nodes (or in a MapReduce environment), has strong approximation guarantees, and requires only a poly-logarithmic number of passes over the input. In particular, we propose a novel distributed algorithm to approximately solve mixed packing-covering linear programs, which include but are not limited to generalized matching problems. Experiments on real-world and synthetic data suggest that our algorithm scales to very large problem sizes and can be orders of magnitude faster than alternative approaches.},
TYPE = {Research Reports},
}
Endnote
%0 Report
%A Makari, Faraz
%A Awerbuch, Baruch
%A Gemulla, Rainer
%A Khandekar, Rohit
%A Mestre, Julian
%A Sozio, Mauro
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
External Organizations
Databases and Information Systems, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T A Distributed Algorithm for Large-scale Generalized Matching :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0024-03B4-3
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2013
%P 39 p.
%X Generalized matching problems arise in a number of applications, including
computational advertising, recommender systems, and trade markets. Consider,
for example, the problem of recommending multimedia items (e.g., DVDs) to
users such that (1) users are recommended items that they are likely to be
interested in, (2) every user gets neither too few nor too many
recommendations, and (3) only items available in stock are recommended to
users. State-of-the-art matching algorithms fail at coping with large
real-world instances, which may involve millions of users and items. We
propose the first distributed algorithm for computing near-optimal solutions
to large-scale generalized matching problems like the one above. Our algorithm
is designed to run on a small cluster of commodity nodes (or in a MapReduce
environment), has strong approximation guarantees, and requires only a
poly-logarithmic number of passes over the input. In particular, we propose a
novel distributed algorithm to approximately solve mixed packing-covering
linear programs, which include but are not limited to generalized matching
problems. Experiments on real-world and synthetic data suggest that our
algorithm scales to very large problem sizes and can be orders of magnitude
faster than alternative approaches.
%B Research Reports
%@ false
2012
Building and Maintaining Halls of Fame Over a Database
F. Alvanaki, S. Michel and A. Stupar
Technical Report, 2012
F. Alvanaki, S. Michel and A. Stupar
Technical Report, 2012
Abstract
Halls of Fame are fascinating constructs. They represent the elite of an often
very large number of entities: persons, companies, products, countries, etc.
Beyond their practical use as static rankings, changes to them are particularly
interesting: for decision-making processes, as input to common media or
novel narrative science applications, or simply consumed by users. In this
work, we aim at detecting events that can be characterized by changes to a
Hall of Fame ranking in an automated way. We describe how the schema and
data of a database can be used to generate Halls of Fame. In this database
scenario, by Hall of Fame we refer to distinguished tuples: entities whose
characteristics set them apart from the majority. We define every Hall of
Fame as one specific instance of an SQL query, such that a change in its
result is considered a noteworthy event. Identified changes (i.e., events) are
ranked using lexicographic tradeoffs over event and query properties and
presented to users or fed into higher-level applications. We have implemented
a full-fledged prototype system that uses either database triggers or a
Java-based middleware for event identification. We report on an experimental
evaluation using a real-world dataset of basketball statistics.
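A minimal sketch of the event-detection step in Python, assuming each Hall of Fame has been materialized as an ordered list of entity keys (names and diff rules are ours; the system described above diffs SQL result sets via triggers or middleware):

def ranking_events(old, new):
    # Diff two versions of a Hall-of-Fame ranking into events.
    events = []
    for entity in set(old) | set(new):
        was = old.index(entity) if entity in old else None
        now = new.index(entity) if entity in new else None
        if was is None:
            events.append((entity, "entered", now))
        elif now is None:
            events.append((entity, "dropped out", was))
        elif was != now:
            events.append((entity, "moved", was - now))  # + = climbed
    return events

print(ranking_events(["Jordan", "Bird", "Magic"],
                     ["Jordan", "Magic", "Bird"]))
# e.g. [('Magic', 'moved', 1), ('Bird', 'moved', -1)]

Ranking the detected events by lexicographic tradeoffs over event and query properties is then a sort over such tuples.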
Export
BibTeX
@techreport{AlvanakiMichelStupar2012,
TITLE = {Building and Maintaining Halls of Fame Over a Database},
AUTHOR = {Alvanaki, Foteini and Michel, Sebastian and Stupar, Aleksandar},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2012-5-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2012},
ABSTRACT = {Halls of Fame are fascinating constructs. They represent the elite of an often very large number of entities: persons, companies, products, countries, etc. Beyond their practical use as static rankings, changes to them are particularly interesting: for decision-making processes, as input to common media or novel narrative science applications, or simply consumed by users. In this work, we aim at detecting events that can be characterized by changes to a Hall of Fame ranking in an automated way. We describe how the schema and data of a database can be used to generate Halls of Fame. In this database scenario, by Hall of Fame we refer to distinguished tuples: entities whose characteristics set them apart from the majority. We define every Hall of Fame as one specific instance of an SQL query, such that a change in its result is considered a noteworthy event. Identified changes (i.e., events) are ranked using lexicographic tradeoffs over event and query properties and presented to users or fed into higher-level applications. We have implemented a full-fledged prototype system that uses either database triggers or a Java-based middleware for event identification. We report on an experimental evaluation using a real-world dataset of basketball statistics.},
TYPE = {Research Reports},
}
Endnote
%0 Report
%A Alvanaki, Foteini
%A Michel, Sebastian
%A Stupar, Aleksandar
%+ Cluster of Excellence Multimodal Computing and Interaction
Databases and Information Systems, MPI for Informatics, Max Planck Society
Cluster of Excellence Multimodal Computing and Interaction
%T Building and Maintaining Halls of Fame Over a Database :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0024-03E9-D
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2012
%X Halls of Fame are fascinating constructs. They represent the elite of an often
very large number of entities: persons, companies, products, countries, etc.
Beyond their practical use as static rankings, changes to them are particularly
interesting: for decision-making processes, as input to common media or
novel narrative science applications, or simply consumed by users. In this
work, we aim at detecting events that can be characterized by changes to a
Hall of Fame ranking in an automated way. We describe how the schema and
data of a database can be used to generate Halls of Fame. In this database
scenario, by Hall of Fame we refer to distinguished tuples: entities whose
characteristics set them apart from the majority. We define every Hall of
Fame as one specific instance of an SQL query, such that a change in its
result is considered a noteworthy event. Identified changes (i.e., events) are
ranked using lexicographic tradeoffs over event and query properties and
presented to users or fed into higher-level applications. We have implemented
a full-fledged prototype system that uses either database triggers or a
Java-based middleware for event identification. We report on an experimental
evaluation using a real-world dataset of basketball statistics.
%B Research Reports
%@ false
Computing n-Gram Statistics in MapReduce
K. Berberich and S. Bedathur
Technical Report, 2012
K. Berberich and S. Bedathur
Technical Report, 2012
Export
BibTeX
@techreport{BerberichBedathur2012,
TITLE = {Computing n-Gram Statistics in {MapReduce}},
AUTHOR = {Berberich, Klaus and Bedathur, Srikanta},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2012-5-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2012},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Berberich, Klaus
%A Bedathur, Srikanta
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
External Organizations
%T Computing n-Gram Statistics in MapReduce :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0024-0416-A
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2012
%P 39 p.
%B Research Report
%@ false
Top-k Query Processing in Probabilistic Databases with Non-materialized Views
M. Dylla, I. Miliaraki and M. Theobald
Technical Report, 2012
M. Dylla, I. Miliaraki and M. Theobald
Technical Report, 2012
Export
BibTeX
@techreport{DyllaTopk2012,
TITLE = {Top-k Query Processing in Probabilistic Databases with Non-materialized Views},
AUTHOR = {Dylla, Maximilian and Miliaraki, Iris and Theobald, Martin},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2012-5-002},
LOCALID = {Local-ID: 62EC1C9C96B8EFF4C1257B560029F18C-DyllaTopk2012},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2012},
DATE = {2012},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Dylla, Maximilian
%A Miliaraki, Iris
%A Theobald, Martin
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Top-k Query Processing in Probabilistic Databases with Non-materialized Views :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B02F-2
%F OTHER: Local-ID: 62EC1C9C96B8EFF4C1257B560029F18C-DyllaTopk2012
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2012
%B Research Report
%@ false
Automatic Generation of Invariants for Circular Derivations in SUP(LA)
A. Fietzke, E. Kruglov and C. Weidenbach
Technical Report, 2012
A. Fietzke, E. Kruglov and C. Weidenbach
Technical Report, 2012
Abstract
The hierarchic combination of linear arithmetic and first-order
logic with free function symbols, FOL(LA), results in a strictly
more expressive logic than its two parts. The SUP(LA) calculus can be
turned into a decision procedure for interesting fragments of FOL(LA).
For example, reachability problems for timed automata can be decided
by SUP(LA) using an appropriate translation into FOL(LA). In this paper,
we extend the SUP(LA) calculus with an additional inference rule,
automatically generating inductive invariants from partial SUP(LA)
derivations. The rule enables decidability of more expressive fragments,
including reachability for timed automata with unbounded integer variables.
We have implemented the rule in the SPASS(LA) theorem prover
with promising results, showing that it can considerably speed up proof
search and enable termination of saturation for practically relevant
problems.
Export
BibTeX
@techreport{FietzkeKruglovWeidenbach2012,
TITLE = {Automatic Generation of Invariants for Circular Derivations in {SUP(LA)}},
AUTHOR = {Fietzke, Arnaud and Kruglov, Evgeny and Weidenbach, Christoph},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2012-RG1-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2012},
ABSTRACT = {The hierarchic combination of linear arithmetic and first-order logic with free function symbols, FOL(LA), results in a strictly more expressive logic than its two parts. The SUP(LA) calculus can be turned into a decision procedure for interesting fragments of FOL(LA). For example, reachability problems for timed automata can be decided by SUP(LA) using an appropriate translation into FOL(LA). In this paper, we extend the SUP(LA) calculus with an additional inference rule, automatically generating inductive invariants from partial SUP(LA) derivations. The rule enables decidability of more expressive fragments, including reachability for timed automata with unbounded integer variables. We have implemented the rule in the SPASS(LA) theorem prover with promising results, showing that it can considerably speed up proof search and enable termination of saturation for practically relevant problems.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Fietzke, Arnaud
%A Kruglov, Evgeny
%A Weidenbach, Christoph
%+ Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
%T Automatic Generation of Invariants for Circular Derivations in SUP(LA) :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0024-03CF-9
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2012
%P 26 p.
%X The hierarchic combination of linear arithmetic and first-order
logic with free function symbols, FOL(LA), results in a strictly
more expressive logic than its two parts. The SUP(LA) calculus can be
turned into a decision procedure for interesting fragments of FOL(LA).
For example, reachability problems for timed automata can be decided
by SUP(LA) using an appropriate translation into FOL(LA). In this paper,
we extend the SUP(LA) calculus with an additional inference rule,
automatically generating inductive invariants from partial SUP(LA)
derivations. The rule enables decidability of more expressive fragments,
including reachability for timed automata with unbounded integer variables.
We have implemented the rule in the SPASS(LA) theorem prover
with promising results, showing that it can considerably speed up proof
search and enable termination of saturation for practically relevant
problems.
%B Research Report
%@ false
Symmetry Detection in Large Scale City Scans
J. Kerber, M. Wand, M. Bokeloh and H.-P. Seidel
Technical Report, 2012
J. Kerber, M. Wand, M. Bokeloh and H.-P. Seidel
Technical Report, 2012
Abstract
In this report we present a novel method for detecting partial symmetries
in very large point clouds of 3D city scans. Unlike previous work, which
was limited to data sets of a few hundred megabytes maximum, our method
scales to very large scenes. We map the detection problem to a nearest-neighbor
search in a low-dimensional feature space, followed by a cascade of
tests for geometric clustering of potential matches. Our algorithm robustly
handles noisy real-world scanner data, obtaining a recognition performance
comparable to state-of-the-art methods. In practice, it scales linearly with
the scene size and achieves a high absolute throughput, processing half a
terabyte of raw scanner data overnight on a dual-socket commodity PC.
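The matching stage maps directly onto an off-the-shelf k-d tree; a Python sketch with SciPy (the geometric feature descriptor, which the method derives from the scan data, is faked here as random vectors):

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
features = rng.normal(size=(100_000, 8))  # one low-dim descriptor per keypoint

tree = cKDTree(features)
dists, idx = tree.query(features, k=5)    # 5 nearest neighbors per keypoint
# Candidate symmetric pairs: distinct keypoints with near-identical
# descriptors; entry 0 of each row is the keypoint itself.
candidates = [(i, int(j))
              for i, (drow, jrow) in enumerate(zip(dists, idx))
              for d, j in zip(drow[1:], jrow[1:]) if d < 0.5]

The cascade of geometric clustering tests that turns such candidates into verified symmetries is the part this sketch omits.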
Export
BibTeX
@techreport{KerberBokelohWandSeidel2012,
TITLE = {Symmetry Detection in Large Scale City Scans},
AUTHOR = {Kerber, Jens and Wand, Michael and Bokeloh, Martin and Seidel, Hans-Peter},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2012-4-001},
YEAR = {2012},
ABSTRACT = {In this report we present a novel method for detecting partial symmetries in very large point clouds of 3D city scans. Unlike previous work, which was limited to data sets of a few hundred megabytes maximum, our method scales to very large scenes. We map the detection problem to a nearest-neighbor search in a low-dimensional feature space, followed by a cascade of tests for geometric clustering of potential matches. Our algorithm robustly handles noisy real-world scanner data, obtaining a recognition performance comparable to state-of-the-art methods. In practice, it scales linearly with the scene size and achieves a high absolute throughput, processing half a terabyte of raw scanner data overnight on a dual-socket commodity PC.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Kerber, Jens
%A Wand, Michael
%A Bokeloh, Martin
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
%T Symmetry Detection in Large Scale City Scans :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0024-0427-4
%D 2012
%P 32 p.
%X In this report we present a novel method for detecting partial symmetries
in very large point clouds of 3D city scans. Unlike previous work, which
was limited to data sets of a few hundred megabytes maximum, our method
scales to very large scenes. We map the detection problem to a nearest-neighbor
search in a low-dimensional feature space, followed by a cascade of
tests for geometric clustering of potential matches. Our algorithm robustly
handles noisy real-world scanner data, obtaining a recognition performance
comparable to state-of-the-art methods. In practice, it scales linearly with
the scene size and achieves a high absolute throughput, processing half a
terabyte of raw scanner data overnight on a dual-socket commodity PC.
%B Research Report
%@ false
MDL4BMF: Minimum Description Length for Boolean Matrix Factorization
P. Miettinen and J. Vreeken
Technical Report, 2012
P. Miettinen and J. Vreeken
Technical Report, 2012
Abstract
Matrix factorizations—where a given data matrix is approximated by a product of two or more factor matrices—are powerful data mining tools. Among other tasks, matrix factorizations are often used to separate global structure from noise. This, however, requires solving the ‘model order selection problem’ of determining where fine-grained structure stops, and noise starts, i.e., what is the proper size of the factor matrices.
Boolean matrix factorization (BMF)—where data, factors, and matrix product are Boolean—has received increased attention from the data mining community in recent years. The technique has desirable properties, such as high interpretability and natural sparsity. However, so far no method for selecting the correct model order for BMF has been available. In this paper we propose to use the Minimum Description Length (MDL) principle for this task. Besides solving the problem, this well-founded approach has numerous benefits, e.g., it is automatic, does not require a likelihood function, is fast, and, as experiments show, is highly accurate.
We formulate the description length function for BMF in general—making it applicable for any BMF algorithm. We discuss how to construct an appropriate encoding: starting from a simple and intuitive approach, we arrive at a highly efficient data-to-model based encoding for BMF. We extend an existing algorithm for BMF to use MDL to identify the best Boolean matrix factorization, analyze the complexity of the problem, and perform an extensive experimental evaluation to study its behavior.
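The selection rule itself is compact: among candidate factorizations, keep the one minimizing L(model) + L(data | model). A toy scoring function for Boolean factors in Python (a naive uniform code, far cruder than the data-to-model encoding developed in the report):

import numpy as np

def boolean_product(B, C):
    # Boolean matrix product of 0/1 matrices.
    return (B.astype(int) @ C.astype(int) > 0).astype(int)

def naive_description_length(A, B, C):
    # Bits for the factors (one bit per cell) plus a naive positional
    # code for every cell the Boolean product gets wrong.
    n, k = B.shape
    _, m = C.shape
    model_bits = n * k + k * m
    errors = int(np.logical_xor(A, boolean_product(B, C)).sum())
    data_bits = errors * np.log2(n * m) if errors else 0.0
    return model_bits + data_bits

Model order selection then amounts to running a BMF algorithm for k = 1, 2, ... and keeping the k whose factors minimize this total.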
Export
BibTeX
@techreport{MiettinenVreeken,
TITLE = {{MDL4BMF}: Minimum Description Length for Boolean Matrix Factorization},
AUTHOR = {Miettinen, Pauli and Vreeken, Jilles},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2012-5-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2012},
ABSTRACT = {Matrix factorizations---where a given data matrix is approximated by a product of two or more factor matrices---are powerful data mining tools. Among other tasks, matrix factorizations are often used to separate global structure from noise. This, however, requires solving the {\textquoteleft}model order selection problem{\textquoteright} of determining where fine-grained structure stops, and noise starts, i.e., what is the proper size of the factor matrices. Boolean matrix factorization (BMF)---where data, factors, and matrix product are Boolean---has received increased attention from the data mining community in recent years. The technique has desirable properties, such as high interpretability and natural sparsity. However, so far no method for selecting the correct model order for BMF has been available. In this paper we propose to use the Minimum Description Length (MDL) principle for this task. Besides solving the problem, this well-founded approach has numerous benefits, e.g., it is automatic, does not require a likelihood function, is fast, and, as experiments show, is highly accurate. We formulate the description length function for BMF in general---making it applicable for any BMF algorithm. We discuss how to construct an appropriate encoding: starting from a simple and intuitive approach, we arrive at a highly efficient data-to-model based encoding for BMF. We extend an existing algorithm for BMF to use MDL to identify the best Boolean matrix factorization, analyze the complexity of the problem, and perform an extensive experimental evaluation to study its behavior.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Miettinen, Pauli
%A Vreeken, Jilles
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
External Organizations
%T MDL4BMF: Minimum Description Length for Boolean Matrix Factorization :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0024-0422-E
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2012
%P 48 p.
%X Matrix factorizations—where a given data matrix is approximated by a product of two or more factor matrices—are powerful data mining tools. Among other tasks, matrix factorizations are often used to separate global structure from noise. This, however, requires solving the ‘model order selection problem’ of determining where fine-grained structure stops, and noise starts, i.e., what is the proper size of the factor matrices.
Boolean matrix factorization (BMF)—where data, factors, and matrix product are Boolean—has received increased attention from the data mining community in recent years. The technique has desirable properties, such as high interpretability and natural sparsity. However, so far no method for selecting the correct model order for BMF has been available. In this paper we propose to use the Minimum Description Length (MDL) principle for this task. Besides solving the problem, this well-founded approach has numerous benefits, e.g., it is automatic, does not require a likelihood function, is fast, and, as experiments show, is highly accurate.
We formulate the description length function for BMF in general—making it applicable for any BMF algorithm. We discuss how to construct an appropriate encoding: starting from a simple and intuitive approach, we arrive at a highly efficient data-to-model based encoding for BMF. We extend an existing algorithm for BMF to use MDL to identify the best Boolean matrix factorization, analyze the complexity of the problem, and perform an extensive experimental evaluation to study its behavior.
%B Research Report
%@ false
Labelled Superposition for PLTL
M. Suda and C. Weidenbach
Technical Report, 2012
M. Suda and C. Weidenbach
Technical Report, 2012
Abstract
This paper introduces a new decision procedure for PLTL based on labelled
superposition.
Its main idea is to treat temporal formulas as infinite sets of purely
propositional clauses over an extended signature. These infinite sets are then
represented by finite sets of labelled propositional clauses. The new
representation enables the replacement of the complex temporal resolution
rule, suggested by existing resolution calculi for PLTL, by a fine grained
repetition check of finitely saturated labelled clause sets followed by a
simple inference. The completeness argument is based on the standard model
building idea from superposition. It inherently justifies ordering
restrictions, redundancy elimination and effective partial model building. The
latter can be directly used to effectively generate counterexamples of
non-valid PLTL conjectures out of saturated labelled clause sets in a
straightforward way.
Export
BibTeX
@techreport{SudaWeidenbachLPAR2012,
TITLE = {Labelled Superposition for {PLTL}},
AUTHOR = {Suda, Martin and Weidenbach, Christoph},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2012-RG1-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2012},
ABSTRACT = {This paper introduces a new decision procedure for PLTL based on labelled superposition. Its main idea is to treat temporal formulas as infinite sets of purely propositional clauses over an extended signature. These infinite sets are then represented by finite sets of labelled propositional clauses. The new representation enables the replacement of the complex temporal resolution rule, suggested by existing resolution calculi for PLTL, by a fine grained repetition check of finitely saturated labelled clause sets followed by a simple inference. The completeness argument is based on the standard model building idea from superposition. It inherently justifies ordering restrictions, redundancy elimination and effective partial model building. The latter can be directly used to effectively generate counterexamples of non-valid PLTL conjectures out of saturated labelled clause sets in a straightforward way.},
TYPE = {Research Reports},
}
Endnote
%0 Report
%A Suda, Martin
%A Weidenbach, Christoph
%+ Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
%T Labelled Superposition for PLTL :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0024-03DC-B
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2012
%P 42 p.
%X This paper introduces a new decision procedure for PLTL based on labelled
superposition.
Its main idea is to treat temporal formulas as infinite sets of purely
propositional clauses over an extended signature. These infinite sets are then
represented by finite sets of labelled propositional clauses. The new
representation enables the replacement of the complex temporal resolution
rule, suggested by existing resolution calculi for PLTL, by a fine grained
repetition check of finitely saturated labelled clause sets followed by a
simple inference. The completeness argument is based on the standard model
building idea from superposition. It inherently justifies ordering
restrictions, redundancy elimination and effective partial model building. The
latter can be directly used to effectively generate counterexamples of
non-valid PLTL conjectures out of saturated labelled clause sets in a
straightforward way.
%B Research Reports
%@ false
2011
Temporal Index Sharding for Space-time Efficiency in Archive Search
A. Anand, S. Bedathur, K. Berberich and R. Schenkel
Technical Report, 2011
A. Anand, S. Bedathur, K. Berberich and R. Schenkel
Technical Report, 2011
Abstract
Time-travel queries that couple temporal constraints with keyword
queries are useful in searching large-scale archives of time-evolving
content such as the Web, document collections, wikis, and so
on. Typical approaches for efficient evaluation of these queries
involve \emph{slicing} along the time-axis either the entire
collection~\cite{253349}, or individual index
lists~\cite{kberberi:sigir2007}. Both these methods are not
satisfactory since they sacrifice compactness of index for processing
efficiency making them either too big or, otherwise, too slow.
We present a novel index organization scheme that \emph{shards} the
index with \emph{zero increase in index size}, still minimizing the
cost of reading index entries during query processing. Based on
the optimal sharding thus obtained, we develop practically efficient
sharding that takes into account the different costs of random and
sequential accesses. Our algorithm merges shards from the optimal
solution carefully to allow for few extra sequential accesses while
gaining significantly by reducing the random accesses. Finally, we
empirically establish the effectiveness of our novel sharding scheme
via detailed experiments over the edit history of the English version
of Wikipedia between 2001-2005 ($\approx$ 700 GB) and an archive of
the UK governmental web sites ($\approx$ 400 GB). Our results
demonstrate the feasibility of faster time-travel query processing
with no space overhead.
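The trade-off driving the shard merging can be sketched in a few lines of Python (the cost constants and the greedy rule are ours, for illustration): given the entry ranges a query needs from one index list, it pays to read through a gap sequentially whenever that is cheaper than another random access.

def plan_reads(runs, c_rand=100.0, c_seq=1.0):
    # runs: sorted (start, end) entry ranges needed from one index list.
    plan = [list(runs[0])]
    for start, end in runs[1:]:
        gap = start - plan[-1][1]
        if c_seq * gap < c_rand:   # scanning the gap beats a seek
            plan[-1][1] = end
        else:                      # seeking beats scanning
            plan.append([start, end])
    return plan

print(plan_reads([(0, 50), (70, 100), (600, 650)]))
# [[0, 100], [600, 650]]: the small gap is scanned, the large one seeked

The sharding described above operates on the index organization itself rather than on per-query ranges, but the random-versus-sequential cost comparison is the same.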
Export
BibTeX
@techreport{Bedathur2011,
TITLE = {Temporal Index Sharding for Space-time Efficiency in Archive Search},
AUTHOR = {Anand, Avishek and Bedathur, Srikanta and Berberich, Klaus and Schenkel, Ralf},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2011-5-001},
INSTITUTION = {Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2011},
DATE = {2011},
ABSTRACT = {Time-travel queries that couple temporal constraints with keyword queries are useful in searching large-scale archives of time-evolving content such as the Web, document collections, wikis, and so on. Typical approaches for efficient evaluation of these queries involve \emph{slicing} along the time-axis either the entire collection~\cite{253349}, or individual index lists~\cite{kberberi:sigir2007}. Both these methods are not satisfactory since they sacrifice compactness of index for processing efficiency making them either too big or, otherwise, too slow. We present a novel index organization scheme that \emph{shards} the index with \emph{zero increase in index size}, still minimizing the cost of reading index entries during query processing. Based on the optimal sharding thus obtained, we develop practically efficient sharding that takes into account the different costs of random and sequential accesses. Our algorithm merges shards from the optimal solution carefully to allow for few extra sequential accesses while gaining significantly by reducing the random accesses. Finally, we empirically establish the effectiveness of our novel sharding scheme via detailed experiments over the edit history of the English version of Wikipedia between 2001-2005 ($\approx$ 700 GB) and an archive of the UK governmental web sites ($\approx$ 400 GB). Our results demonstrate the feasibility of faster time-travel query processing with no space overhead.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Anand, Avishek
%A Bedathur, Srikanta
%A Berberich, Klaus
%A Schenkel, Ralf
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Temporal Index Sharding for Space-time Efficiency in Archive Search :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0025-7311-D
%Y Universität des Saarlandes
%C Saarbrücken
%D 2011
%X Time-travel queries that couple temporal constraints with keyword
queries are useful in searching large-scale archives of time-evolving
content such as the Web, document collections, wikis, and so
on. Typical approaches for efficient evaluation of these queries
involve \emph{slicing} along the time-axis either the entire
collection~\cite{253349}, or individual index
lists~\cite{kberberi:sigir2007}. Both these methods are not
satisfactory since they sacrifice compactness of index for processing
efficiency making them either too big or, otherwise, too slow.
We present a novel index organization scheme that \emph{shards} the
index with \emph{zero increase in index size}, still minimizing the
cost of reading index entries during query processing. Based on
the optimal sharding thus obtained, we develop practically efficient
sharding that takes into account the different costs of random and
sequential accesses. Our algorithm merges shards from the optimal
solution carefully to allow for few extra sequential accesses while
gaining significantly by reducing the random accesses. Finally, we
empirically establish the effectiveness of our novel sharding scheme
via detailed experiments over the edit history of the English version
of Wikipedia between 2001-2005 ($\approx$ 700 GB) and an archive of
the UK governmental web sites ($\approx$ 400 GB). Our results
demonstrate the feasibility of faster time-travel query processing
with no space overhead.
%B Research Report
%@ false
A Morphable Part Model for Shape Manipulation
A. Berner, O. Burghard, M. Wand, N. Mitra, R. Klein and H.-P. Seidel
Technical Report, 2011
A. Berner, O. Burghard, M. Wand, N. Mitra, R. Klein and H.-P. Seidel
Technical Report, 2011
Abstract
We introduce morphable part models for smart shape manipulation using an assembly
of deformable parts with appropriate boundary conditions. In an analysis
phase, we characterize the continuous allowable variations both for the individual
parts and their interconnections using Gaussian shape models with low
rank covariance. The discrete aspect of how parts can be assembled is captured
using a shape grammar. The parts and their interconnection rules are learned
semi-automatically from symmetries within a single object or from semantically
corresponding parts across a larger set of example models. The learned discrete
and continuous structure is encoded as a graph. In the interaction phase, we
obtain an interactive yet intuitive shape deformation framework producing realistic
deformations on classes of objects that are difficult to edit using existing
structure-aware deformation techniques. Unlike previous techniques, our method
uses self-similarities from a single model as training input and allows the user
to reassemble the identified parts in new configurations, thus exploiting both the
discrete and continuous learned variations while ensuring appropriate boundary
conditions across part boundaries.
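The continuous per-part model is a Gaussian with low-rank covariance, i.e., PCA over registered part geometry; a numpy sketch (training shapes are faked here as perturbations of a random base part):

import numpy as np

rng = np.random.default_rng(1)
base = rng.normal(size=300)                        # 100 vertices x 3 coords, flattened
examples = base + 0.1 * rng.normal(size=(20, 300)) # stand-in training shapes

mean = examples.mean(axis=0)
U, S, Vt = np.linalg.svd(examples - mean, full_matrices=False)
k = 5
components = Vt[:k]                          # low-rank basis of allowed variation
sigma = S[:k] / np.sqrt(len(examples) - 1)   # per-component standard deviations

def sample_part():
    # Draw a random plausible shape for this part from the model.
    coeffs = rng.normal(size=k) * sigma
    return mean + coeffs @ components

The interconnection constraints and the shape grammar over part assemblies sit on top of such per-part models and are not shown here.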
Export
BibTeX
@techreport{BernerBurghardWandMitraKleinSeidel2011,
TITLE = {A Morphable Part Model for Shape Manipulation},
AUTHOR = {Berner, Alexander and Burghard, Oliver and Wand, Michael and Mitra, Niloy and Klein, Reinhard and Seidel, Hans-Peter},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2011-4-005},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2011},
DATE = {2011},
ABSTRACT = {We introduce morphable part models for smart shape manipulation using an assembly of deformable parts with appropriate boundary conditions. In an analysis phase, we characterize the continuous allowable variations both for the individual parts and their interconnections using Gaussian shape models with low rank covariance. The discrete aspect of how parts can be assembled is captured using a shape grammar. The parts and their interconnection rules are learned semi-automatically from symmetries within a single object or from semantically corresponding parts across a larger set of example models. The learned discrete and continuous structure is encoded as a graph. In the interaction phase, we obtain an interactive yet intuitive shape deformation framework producing realistic deformations on classes of objects that are difficult to edit using existing structure-aware deformation techniques. Unlike previous techniques, our method uses self-similarities from a single model as training input and allows the user to reassemble the identified parts in new configurations, thus exploiting both the discrete and continuous learned variations while ensuring appropriate boundary conditions across part boundaries.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Berner, Alexander
%A Burghard, Oliver
%A Wand, Michael
%A Mitra, Niloy
%A Klein, Reinhard
%A Seidel, Hans-Peter
%+ External Organizations
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
%T A Morphable Part Model for Shape Manipulation :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6972-0
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2011
%P 33 p.
%X We introduce morphable part models for smart shape manipulation using an assembly
of deformable parts with appropriate boundary conditions. In an analysis
phase, we characterize the continuous allowable variations both for the individual
parts and their interconnections using Gaussian shape models with low
rank covariance. The discrete aspect of how parts can be assembled is captured
using a shape grammar. The parts and their interconnection rules are learned
semi-automatically from symmetries within a single object or from semantically
corresponding parts across a larger set of example models. The learned discrete
and continuous structure is encoded as a graph. In the interaction phase, we
obtain an interactive yet intuitive shape deformation framework producing realistic
deformations on classes of objects that are difficult to edit using existing
structure-aware deformation techniques. Unlike previous techniques, our method
uses self-similarities from a single model as training input and allows the user
to reassemble the identified parts in new configurations, thus exploiting both the
discrete and continuous learned variations while ensuring appropriate boundary
conditions across part boundaries.
%B Research Report
%@ false
PTIME Parametric Verification of Safety Properties for Reasonable Linear Hybrid Automata
W. Damm, C. Ihlemann and V. Sofronie-Stokkermans
Technical Report, 2011
Abstract
This paper identifies an industrially relevant class of
linear hybrid automata (LHA) called reasonable LHA for
which parametric verification of convex safety properties
with exhaustive entry states can be verified in polynomial
time and time-bounded reachability can be decided
in nondeterministic polynomial time for non-parametric
verification and in exponential time for
parametric verification. Properties with exhaustive entry
states are restricted to runs originating in
a (specified) inner envelope of some mode-invariant.
Deciding whether an LHA is reasonable is
shown to be decidable in polynomial time.
Export
BibTeX
@techreport{Damm-Ihlemann-Sofronie-Stokkermans2011-report,
TITLE = {{PTIME} Parametric Verification of Safety Properties for Reasonable Linear Hybrid Automata},
AUTHOR = {Damm, Werner and Ihlemann, Carsten and Sofronie-Stokkermans, Viorica},
LANGUAGE = {eng},
ISSN = {1860-9821},
NUMBER = {ATR70},
LOCALID = {Local-ID: C125716C0050FB51-DEB90D4E9EAE27B7C1257855003AF8EE-Damm-Ihlemann-Sofronie-Stokkermans2011-report},
INSTITUTION = {SFB/TR 14 AVACS},
YEAR = {2011},
DATE = {2011},
ABSTRACT = {This paper identifies an industrially relevant class of linear hybrid automata (LHA) called reasonable LHA for which parametric verification of convex safety properties with exhaustive entry states can be verified in polynomial time and time-bounded reachability can be decided in nondeterministic polynomial time for non-parametric verification and in exponential time for parametric verification. Properties with exhaustive entry states are restricted to runs originating in a (specified) inner envelope of some mode-invariant. Deciding whether an LHA is reasonable is shown to be decidable in polynomial time.},
TYPE = {AVACS Technical Report},
VOLUME = {70},
}
Endnote
%0 Report
%A Damm, Werner
%A Ihlemann, Carsten
%A Sofronie-Stokkermans, Viorica
%+ External Organizations
Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
%T PTIME Parametric Verification of Safety Properties for Reasonable Linear Hybrid Automata :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0010-14F5-F
%F EDOC: 619013
%F OTHER: Local-ID: C125716C0050FB51-DEB90D4E9EAE27B7C1257855003AF8EE-Damm-Ihlemann-Sofronie-Stokkermans2011-report
%Y SFB/TR 14 AVACS
%D 2011
%P 31 p.
%X This paper identifies an industrially relevant class of
linear hybrid automata (LHA) called reasonable LHA for
which parametric verification of convex safety properties
with exhaustive entry states can be verified in polynomial
time and time-bounded reachability can be decided
in nondeterministic polynomial time for non-parametric
verification and in exponential time for
parametric verification. Properties with exhaustive entry
states are restricted to runs originating in
a (specified) inner envelope of some mode-invariant.
Deciding whether an LHA is reasonable is
shown to be decidable in polynomial time.
%B AVACS Technical Report
%N 70
%@ false
%U http://www.avacs.org/fileadmin/Publikationen/Open/avacs_technical_report_070.pdf
Integrating Incremental Flow Pipes into a Symbolic Model Checker for Hybrid Systems
W. Damm, S. Disch, W. Hagemann, C. Scholl, U. Waldmann and B. Wirtz
Technical Report, 2011
Abstract
We describe an approach to integrate incremental flow pipe computation into a
fully symbolic backward model checker for hybrid systems. Our method combines
the advantages of symbolic state set representation, such as the ability to
deal with large numbers of boolean variables, with an efficient way to handle
continuous flows defined by linear differential equations, possibly including
bounded disturbances.
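As a rough illustration of what one incremental flow-pipe step does, here is a hedged numpy/scipy sketch; it is not the tool described in the report. It propagates an axis-aligned box through the exact solution map of x' = Ax over one time step and widens it by a fixed bloat term, a crude stand-in for the rigorous error bounds a real reachability engine computes.

import numpy as np
from scipy.linalg import expm

def flowpipe_step(A, lo, hi, dt, bloat=0.0):
    # Map the box [lo, hi] through e^{A dt}; the image of a box under a
    # linear map is bounded by the images of its corners.
    Phi = expm(A * dt)
    corners = np.array([[lo[i] if b == 0 else hi[i] for i, b in enumerate(bits)]
                        for bits in np.ndindex(*(2,) * len(lo))])
    imgs = corners @ Phi.T
    return imgs.min(axis=0) - bloat, imgs.max(axis=0) + bloat

A = np.array([[0.0, 1.0], [-1.0, 0.0]])      # harmonic oscillator, made up
lo, hi = np.array([0.9, -0.1]), np.array([1.1, 0.1])
for _ in range(10):                          # ten incremental segments
    lo, hi = flowpipe_step(A, lo, hi, dt=0.1, bloat=0.01)
print(lo, hi)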
Export
BibTeX
@techreport{DammDierksHagemannEtAl2011,
TITLE = {Integrating Incremental Flow Pipes into a Symbolic Model Checker for Hybrid Systems},
AUTHOR = {Damm, Werner and Disch, Stefan and Hagemann, Willem and Scholl, Christoph and Waldmann, Uwe and Wirtz, Boris},
EDITOR = {Becker, Bernd and Damm, Werner and Finkbeiner, Bernd and Fr{\"a}nzle, Martin and Olderog, Ernst-R{\"u}diger and Podelski, Andreas},
LANGUAGE = {eng},
ISSN = {1860-9821},
NUMBER = {ATR76},
INSTITUTION = {SFB/TR 14 AVACS},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2011},
DATE = {2011},
ABSTRACT = {We describe an approach to integrate incremental flow pipe computation into a fully symbolic backward model checker for hybrid systems. Our method combines the advantages of symbolic state set representation, such as the ability to deal with large numbers of boolean variables, with an efficient way to handle continuous flows defined by linear differential equations, possibly including bounded disturbances.},
TYPE = {AVACS Technical Report},
VOLUME = {76},
}
Endnote
%0 Report
%A Damm, Werner
%A Disch, Stefan
%A Hagemann, Willem
%A Scholl, Christoph
%A Waldmann, Uwe
%A Wirtz, Boris
%E Becker, Bernd
%E Damm, Werner
%E Finkbeiner, Bernd
%E Fränzle, Martin
%E Olderog, Ernst-Rüdiger
%E Podelski, Andreas
%+ External Organizations
External Organizations
Automation of Logic, MPI for Informatics, Max Planck Society
External Organizations
Automation of Logic, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
External Organizations
External Organizations
External Organizations
External Organizations
External Organizations
%T Integrating Incremental Flow Pipes into a Symbolic Model Checker for Hybrid Systems :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-001A-150E-7
%Y SFB/TR 14 AVACS
%C Saarbrücken
%D 2011
%X We describe an approach to integrate incremental flow pipe computation into a
fully symbolic backward model checker for hybrid systems. Our method combines
the advantages of symbolic state set representation, such as the ability to
deal with large numbers of boolean variables, with an efficient way to handle
continuous flows defined by linear differential equations, possibly including
bounded disturbances.
%B AVACS Technical Report
%N 76
%@ false
Large-scale Matrix Factorization with Distributed Stochastic Gradient Descent
R. Gemulla, P. J. Haas, E. Nijkamp and Y. Sismanis
Technical Report, 2011
Abstract
As Web 2.0 and enterprise-cloud applications have proliferated, data mining
algorithms increasingly need to be (re)designed to handle web-scale
datasets. For this reason, low-rank matrix factorization has received a lot
of attention in recent years, since it is fundamental to a variety of mining
tasks, such as topic detection and collaborative filtering, that are
increasingly being applied to massive datasets. We provide a novel algorithm
to approximately factor large matrices with millions of rows, millions of
columns, and billions of nonzero elements. Our approach rests on stochastic
gradient descent (SGD), an iterative stochastic optimization algorithm; the
idea is to exploit the special structure of the matrix factorization problem
to develop a new ``stratified'' SGD variant that can be fully distributed
and run on web-scale datasets using, e.g., MapReduce. The resulting
distributed SGD factorization algorithm, called DSGD, provides good speed-up
and handles a wide variety of matrix factorizations. We establish
convergence properties of DSGD using results from stochastic approximation
theory and regenerative process theory, and also describe the practical
techniques used to optimize performance in our DSGD
implementation. Experiments suggest that DSGD converges significantly faster
and has better scalability properties than alternative algorithms.
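The stratification idea from the abstract can be sketched in a few lines. The toy below is a simplification, not the paper's implementation; the function name, hyperparameters, and test matrix are ours. Rows and columns are split into d blocks; within one sub-epoch, the d blocks of a stratum share no rows or columns, which is what lets the real algorithm run them on different machines, while here they run sequentially.

import numpy as np

def dsgd(V, rank=2, d=4, epochs=60, lr=0.05):
    m, n = V.shape
    rng = np.random.default_rng(0)
    W, H = 0.1 * rng.random((m, rank)), 0.1 * rng.random((rank, n))
    row_blk = np.array_split(np.arange(m), d)
    col_blk = np.array_split(np.arange(n), d)
    for _ in range(epochs):
        for s in range(d):                    # one stratum per sub-epoch
            for b in range(d):                # its d blocks touch disjoint
                rows = row_blk[b]             # rows and columns, so they
                cols = col_blk[(b + s) % d]   # could run in parallel
                for i in rows:
                    for j in cols:            # plain SGD inside a block
                        err = V[i, j] - W[i] @ H[:, j]
                        W[i], H[:, j] = (W[i] + lr * err * H[:, j],
                                         H[:, j] + lr * err * W[i])
    return W, H

V = np.add.outer(np.linspace(0, 1, 8), np.linspace(0, 1, 6))  # rank-2 toy
W, H = dsgd(V)
print(float(np.abs(V - W @ H).max()))        # small residual after training

Note that the d strata together cover every (row block, column block) pair exactly once per epoch, so an epoch of this scheme visits the same entries as an epoch of ordinary SGD.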
Export
BibTeX
@techreport{gemulla11,
TITLE = {Large-scale Matrix Factorization with Distributed Stochastic Gradient Descent},
AUTHOR = {Gemulla, Rainer and Haas, Peter J. and Nijkamp, Erik and Sismanis, Yannis},
LANGUAGE = {eng},
URL = {http://www.almaden.ibm.com/cs/people/peterh/dsgdTechRep.pdf},
LOCALID = {Local-ID: C1256DBF005F876D-5B618B1FF070E981C125784D0044B0D1-gemulla11},
INSTITUTION = {IBM Research Division},
ADDRESS = {San Jose, CA},
YEAR = {2011},
ABSTRACT = {As Web 2.0 and enterprise-cloud applications have proliferated, data mining algorithms increasingly need to be (re)designed to handle web-scale datasets. For this reason, low-rank matrix factorization has received a lot of attention in recent years, since it is fundamental to a variety of mining tasks, such as topic detection and collaborative filtering, that are increasingly being applied to massive datasets. We provide a novel algorithm to approximately factor large matrices with millions of rows, millions of columns, and billions of nonzero elements. Our approach rests on stochastic gradient descent (SGD), an iterative stochastic optimization algorithm; the idea is to exploit the special structure of the matrix factorization problem to develop a new ``stratified'' SGD variant that can be fully distributed and run on web-scale datasets using, e.g., MapReduce. The resulting distributed SGD factorization algorithm, called DSGD, provides good speed-up and handles a wide variety of matrix factorizations. We establish convergence properties of DSGD using results from stochastic approximation theory and regenerative process theory, and also describe the practical techniques used to optimize performance in our DSGD implementation. Experiments suggest that DSGD converges significantly faster and has better scalability properties than alternative algorithms.},
TYPE = {IBM Research Report},
VOLUME = {RJ10481},
}
Endnote
%0 Report
%A Gemulla, Rainer
%A Haas, Peter J.
%A Nijkamp, Erik
%A Sismanis, Yannis
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
External Organizations
%T Large-scale Matrix Factorization with Distributed Stochastic Gradient Descent :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0010-147F-E
%F EDOC: 618949
%U http://www.almaden.ibm.com/cs/people/peterh/dsgdTechRep.pdf
%F OTHER: Local-ID: C1256DBF005F876D-5B618B1FF070E981C125784D0044B0D1-gemulla11
%Y IBM Research Division
%C San Jose, CA
%D 2011
%X As Web 2.0 and enterprise-cloud applications have proliferated, data mining
algorithms increasingly need to be (re)designed to handle web-scale
datasets. For this reason, low-rank matrix factorization has received a lot
of attention in recent years, since it is fundamental to a variety of mining
tasks, such as topic detection and collaborative filtering, that are
increasingly being applied to massive datasets. We provide a novel algorithm
to approximately factor large matrices with millions of rows, millions of
columns, and billions of nonzero elements. Our approach rests on stochastic
gradient descent (SGD), an iterative stochastic optimization algorithm; the
idea is to exploit the special structure of the matrix factorization problem
to develop a new ``stratified'' SGD variant that can be fully distributed
and run on web-scale datasets using, e.g., MapReduce. The resulting
distributed SGD factorization algorithm, called DSGD, provides good speed-up
and handles a wide variety of matrix factorizations. We establish
convergence properties of DSGD using results from stochastic approximation
theory and regenerative process theory, and also describe the practical
techniques used to optimize performance in our DSGD
implementation. Experiments suggest that DSGD converges significantly faster
and has better scalability properties than alternative algorithms.
%B IBM Research Report
%N RJ10481
How Not to Be Seen -- Inpainting Dynamic Objects in Crowded Scenes
M. Granados, J. Tompkin, K. Kim, O. Grau, J. Kautz and C. Theobalt
Technical Report, 2011
Abstract
Removing dynamic objects from videos is an extremely challenging problem that
even visual effects professionals often solve with time-consuming manual
frame-by-frame editing.
We propose a new approach to video completion that can deal with complex scenes
containing dynamic background and non-periodical moving objects.
We build upon the idea that the spatio-temporal hole left by a removed object
can be filled with data available on other regions of the video where the
occluded objects were visible.
Video completion is performed by solving a large combinatorial problem that
searches for an optimal pattern of pixel offsets from occluded to unoccluded
regions.
Our contribution includes an energy functional that generalizes well over
different scenes with stable parameters, and that has the desirable convergence
properties for a graph-cut-based optimization.
We provide an interface to guide the completion process that both reduces
computation time and allows for efficient correction of small errors in the
result.
We demonstrate that our approach can effectively complete complex,
high-resolution occlusions that are greater in difficulty than what existing
methods have shown.
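To convey the offset idea in miniature, here is a toy sketch that is far simpler than the paper's graph-cut optimization over a full offset field: it fills a hole in a 1-D signal by searching for a single shift whose content best matches the known samples bordering the hole. The signal, hole position, and search range are all invented.

import numpy as np

signal = np.sin(2 * np.pi * np.arange(120) / 40)   # period of 40 samples
hole = np.arange(50, 60)                           # occluded samples
known = np.ones(120, bool)
known[hole] = False
border = np.r_[np.arange(45, 50), np.arange(60, 65)]  # samples around hole

best_off, best_cost = None, np.inf
for off in range(-40, 41):                 # candidate source offsets
    src = np.arange(45, 65) + off          # hole plus border, shifted
    if src.min() < 0 or src.max() >= 120 or not known[src].all():
        continue                           # source must be fully visible
    cost = np.sum((signal[border] - signal[border + off]) ** 2)
    if cost < best_cost:
        best_off, best_cost = off, cost

filled = signal.copy()
filled[hole] = signal[hole + best_off]     # copy from the best source region
print(best_off, float(best_cost))          # -40: exactly one period away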
Export
BibTeX
@techreport{Granados2011TR,
TITLE = {How Not to Be Seen -- Inpainting Dynamic Objects in Crowded Scenes},
AUTHOR = {Granados, Miguel and Tompkin, James and Kim, Kwang and Grau, O. and Kautz, Jan and Theobalt, Christian},
LANGUAGE = {eng},
NUMBER = {MPI-I-2011-4-001},
INSTITUTION = {MPI f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2011},
ABSTRACT = {Removing dynamic objects from videos is an extremely challenging problem that even visual effects professionals often solve with time-consuming manual frame-by-frame editing. We propose a new approach to video completion that can deal with complex scenes containing dynamic background and non-periodical moving objects. We build upon the idea that the spatio-temporal hole left by a removed object can be filled with data available on other regions of the video where the occluded objects were visible. Video completion is performed by solving a large combinatorial problem that searches for an optimal pattern of pixel offsets from occluded to unoccluded regions. Our contribution includes an energy functional that generalizes well over different scenes with stable parameters, and that has the desirable convergence properties for a graph-cut-based optimization. We provide an interface to guide the completion process that both reduces computation time and allows for efficient correction of small errors in the result. We demonstrate that our approach can effectively complete complex, high-resolution occlusions that are greater in difficulty than what existing methods have shown.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Granados, Miguel
%A Tompkin, James
%A Kim, Kwang
%A Grau, O.
%A Kautz, Jan
%A Theobalt, Christian
%+ Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
%T How Not to Be Seen -- Inpainting Dynamic Objects in Crowded Scenes :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0010-13C5-3
%F EDOC: 618872
%Y MPI für Informatik
%C Saarbrücken
%D 2011
%P 35 p.
%X Removing dynamic objects from videos is an extremely challenging problem that
even visual effects professionals often solve with time-consuming manual
frame-by-frame editing.
We propose a new approach to video completion that can deal with complex scenes
containing dynamic background and non-periodical moving objects.
We build upon the idea that the spatio-temporal hole left by a removed object
can be filled with data available on other regions of the video where the
occluded objects were visible.
Video completion is performed by solving a large combinatorial problem that
searches for an optimal pattern of pixel offsets from occluded to unoccluded
regions.
Our contribution includes an energy functional that generalizes well over
different scenes with stable parameters, and that has the desirable convergence
properties for a graph-cut-based optimization.
We provide an interface to guide the completion process that both reduces
computation time and allows for efficient correction of small errors in the
result.
We demonstrate that our approach can effectively complete complex,
high-resolution occlusions that are greater in difficulty than what existing
methods have shown.
%B Research Report
Efficient Learning-based Image Enhancement : Application to Compression Artifact Removal and Super-resolution
K. I. Kim, Y. Kwon, J. H. Kim and C. Theobalt
Technical Report, 2011
Abstract
Many computer vision and computational photography applications
essentially solve an image enhancement problem. The image has been
deteriorated by a specific noise process, such as aberrations from camera
optics and compression artifacts, that we would like to remove. We
describe a framework for learning-based image enhancement. At the core of
our algorithm lies a generic regularization framework that comprises a
prior on natural images, as well as an application-specific conditional
model based on Gaussian processes. In contrast to prior learning-based
approaches, our algorithm can instantly learn task-specific degradation
models from sample images which enables users to easily adapt the
algorithm to a specific problem and data set of interest. This is
facilitated by our efficient approximation scheme of large-scale Gaussian
processes. We demonstrate the efficiency and effectiveness of our approach
by applying it to example enhancement applications including single-image
super-resolution, as well as artifact removal in JPEG- and JPEG
2000-encoded images.
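The conditional-model ingredient, Gaussian process regression from degraded to clean values, is easy to show in its textbook 1-D form; the paper's contribution is the large-scale approximation, which this sketch deliberately omits. The kernel width, noise level, and sine toy data below are made up for illustration.

import numpy as np

def rbf(X, Y, length=0.2):
    # Squared-exponential kernel between two 1-D sample sets.
    return np.exp(-0.5 * (X[:, None] - Y[None, :]) ** 2 / length ** 2)

rng = np.random.default_rng(1)
x_train = np.linspace(0.0, 1.0, 40)
y_clean = np.sin(2 * np.pi * x_train)
y_train = y_clean + 0.1 * rng.standard_normal(40)   # "degraded" observations

K = rbf(x_train, x_train) + 0.1 ** 2 * np.eye(40)   # noise variance on diagonal
alpha = np.linalg.solve(K, y_train)

x_test = np.linspace(0.0, 1.0, 200)
y_pred = rbf(x_test, x_train) @ alpha               # GP posterior mean
print(float(np.mean(np.abs(y_pred - np.sin(2 * np.pi * x_test)))))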
Export
BibTeX
@techreport{KimKwonKimTheobalt2011,
TITLE = {Efficient Learning-based Image Enhancement : Application to Compression Artifact Removal and Super-resolution},
AUTHOR = {Kim, Kwang In and Kwon, Younghee and Kim, Jin Hyung and Theobalt, Christian},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2011-4-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2011},
ABSTRACT = {Many computer vision and computational photography applications essentially solve an image enhancement problem. The image has been deteriorated by a specific noise process, such as aberrations from camera optics and compression artifacts, that we would like to remove. We describe a framework for learning-based image enhancement. At the core of our algorithm lies a generic regularization framework that comprises a prior on natural images, as well as an application-specific conditional model based on Gaussian processes. In contrast to prior learning-based approaches, our algorithm can instantly learn task-specific degradation models from sample images which enables users to easily adapt the algorithm to a specific problem and data set of interest. This is facilitated by our efficient approximation scheme of large-scale Gaussian processes. We demonstrate the efficiency and effectiveness of our approach by applying it to example enhancement applications including single-image super-resolution, as well as artifact removal in JPEG- and JPEG 2000-encoded images.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Kim, Kwang In
%A Kwon, Younghee
%A Kim, Jin Hyung
%A Theobalt, Christian
%+ Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
%T Efficient Learning-based Image Enhancement : Application to Compression Artifact Removal and Super-resolution :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0027-13A3-E
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2011
%X Many computer vision and computational photography applications
essentially solve an image enhancement problem. The image has been
deteriorated by a specific noise process, such as aberrations from camera
optics and compression artifacts, that we would like to remove. We
describe a framework for learning-based image enhancement. At the core of
our algorithm lies a generic regularization framework that comprises a
prior on natural images, as well as an application-specific conditional
model based on Gaussian processes. In contrast to prior learning-based
approaches, our algorithm can instantly learn task-specific degradation
models from sample images which enables users to easily adapt the
algorithm to a specific problem and data set of interest. This is
facilitated by our efficient approximation scheme of large-scale Gaussian
processes. We demonstrate the efficiency and effectiveness of our approach
by applying it to example enhancement applications including single-image
super-resolution, as well as artifact removal in JPEG- and JPEG
2000-encoded images.
%B Research Report
%@ false
Towards Verification of the Pastry Protocol using TLA+
T. Lu, S. Merz and C. Weidenbach
Technical Report, 2011
Abstract
Pastry is an algorithm that provides a scalable distributed hash table over
an underlying P2P network. Several implementations of Pastry are available
and have been applied in practice, but no attempt has so far been made to
formally describe the algorithm or to verify its properties. Since Pastry combines
rather complex data structures, asynchronous communication, concurrency,
resilience to churn and fault tolerance, it makes an interesting target
for verification. We have modeled Pastry's core routing algorithms and communication
protocol in the specification language TLA+. In order to validate
the model and to search for bugs we employed the TLA+ model checker TLC
to analyze several qualitative properties. We obtained non-trivial insights into
the behavior of Pastry through the model checking analysis. Furthermore,
we started to verify Pastry using the very same model and the interactive
theorem prover TLAPS for TLA+. A first result is the reduction of global
Pastry correctness properties to invariants of the underlying data structures.
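For readers unfamiliar with Pastry, the flavor of the routing step being modeled can be conveyed in a few lines. This is a drastic simplification (global knowledge of all nodes, short hex IDs), not the TLA+ model or the real protocol with its routing table and leaf set: a message is forwarded to the known node whose ID shares the longest prefix with the key, with ties broken by numeric closeness.

def shared_prefix(a, b):
    # Length of the common ID prefix of two hex strings.
    n = 0
    while n < min(len(a), len(b)) and a[n] == b[n]:
        n += 1
    return n

def route_step(nodes, key):
    # Prefer longer shared prefix, then smaller numeric distance.
    return max(nodes, key=lambda nid: (shared_prefix(nid, key),
                                       -abs(int(nid, 16) - int(key, 16))))

nodes = ["10af", "1234", "12f0", "a0c4"]
print(route_step(nodes, "12cd"))   # "12f0": shares "12", numerically closest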
Export
BibTeX
@techreport{LuMerzWeidenbach2011,
TITLE = {Towards Verification of the {Pastry} Protocol using {TLA+}},
AUTHOR = {Lu, Tianxiang and Merz, Stephan and Weidenbach, Christoph},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2011-RG1-002},
NUMBER = {MPI-I-2011-RG1-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2011},
DATE = {2011},
ABSTRACT = {Pastry is an algorithm that provides a scalable distributed hash table over an underlying P2P network. Several implementations of Pastry are available and have been applied in practice, but no attempt has so far been made to formally describe the algorithm or to verify its properties. Since Pastry combines rather complex data structures, asynchronous communication, concurrency, resilience to churn and fault tolerance, it makes an interesting target for verification. We have modeled Pastry's core routing algorithms and communication protocol in the specification language TLA+. In order to validate the model and to search for bugs we employed the TLA+ model checker TLC to analyze several qualitative properties. We obtained non-trivial insights into the behavior of Pastry through the model checking analysis. Furthermore, we started to verify Pastry using the very same model and the interactive theorem prover TLAPS for TLA+. A first result is the reduction of global Pastry correctness properties to invariants of the underlying data structures.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Lu, Tianxiang
%A Merz, Stephan
%A Weidenbach, Christoph
%+ Automation of Logic, MPI for Informatics, Max Planck Society
External Organizations
Automation of Logic, MPI for Informatics, Max Planck Society
%T Towards Verification of the Pastry Protocol using TLA+ :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6975-A
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2011-RG1-002
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2011
%P 51 p.
%X Pastry is an algorithm that provides a scalable distributed hash table over
an underlying P2P network. Several implementations of Pastry are available
and have been applied in practice, but no attempt has so far been made to
formally describe the algorithm or to verify its properties. Since Pastry combines
rather complex data structures, asynchronous communication, concurrency,
resilience to churn and fault tolerance, it makes an interesting target
for verification. We have modeled Pastry's core routing algorithms and communication
protocol in the specification language TLA+. In order to validate
the model and to search for bugs we employed the TLA+ model checker TLC
to analyze several qualitative properties. We obtained non-trivial insights into
the behavior of Pastry through the model checking analysis. Furthermore,
we started to verify Pastry using the very same model and the interactive
theorem prover TLAPS for TLA+. A first result is the reduction of global
Pastry correctness properties to invariants of the underlying data structures.
%B Research Report
Finding Images of Rare and Ambiguous Entities
B. Taneva, M. Kacimi El Hassani and G. Weikum
Technical Report, 2011
Export
BibTeX
@techreport{TanevaKacimiWeikum2011,
TITLE = {Finding Images of Rare and Ambiguous Entities},
AUTHOR = {Taneva, Bilyana and Kacimi El Hassani, M. and Weikum, Gerhard},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2011-5-002},
NUMBER = {MPI-I-2011-5-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2011},
DATE = {2011},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Taneva, Bilyana
%A Kacimi El Hassani, M.
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Finding Images of Rare and Ambiguous Entities :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6581-8
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2011-5-002
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2011
%P 30 p.
%B Research Report
Videoscapes: Exploring Unstructured Video Collections
J. Tompkin, K. I. Kim, J. Kautz and C. Theobalt
Technical Report, 2011
Export
BibTeX
@techreport{TompkinKimKautzTheobalt2011,
TITLE = {Videoscapes: Exploring Unstructured Video Collections},
AUTHOR = {Tompkin, James and Kim, Kwang In and Kautz, Jan and Theobalt, Christian},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2011-4-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2011},
DATE = {2011},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Tompkin, James
%A Kim, Kwang In
%A Kautz, Jan
%A Theobalt, Christian
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Videoscapes: Exploring Unstructured Video Collections :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-F76C-8
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2011
%P 32 p.
%B Research Report
%@ false
2010
A New Combinatorial Approach to Parametric Path Analysis
E. Althaus, S. Altmeyer and R. Naujoks
Technical Report, 2010
Abstract
Hard real-time systems require tasks to finish in time. To guarantee the
timeliness of such a system, static timing analyses derive upper bounds on the
\emph{worst-case execution time} of tasks. There are two types of timing
analyses: numeric and parametric ones. A numeric analysis derives a numeric
timing bound and, to this end, assumes all information such as loop bounds to
be given a priori.
If these bounds are unknown during analysis time, a parametric analysis can
compute a timing formula parametric in these variables.
A performance bottleneck of timing analyses, numeric and especially parametric,
can be the so-called path analysis, which determines the path in the analyzed
task with the longest execution time bound.
In this paper, we present a new approach to the path analysis.
This approach exploits the rather regular structure of software for hard
real-time and safety-critical systems.
As we show in the evaluation of this paper, we strongly improve upon former
techniques in terms of precision and runtime in the parametric case. Even in
the numeric case, our approach matches up to state-of-the-art techniques and
may be an alternative to commercial tools employed for path analysis.
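For context, the path-analysis step itself, finding the execution path with the longest time bound, reduces to longest-path dynamic programming once loops are bounded and the control-flow graph is acyclic. The sketch below shows only that numeric baseline, not the paper's combinatorial parametric method; the graph and node costs are invented.

import graphlib  # standard library, Python >= 3.9

# Hypothetical CFG: per-node execution-time bounds and successor edges.
cost = {"entry": 1, "a": 5, "b": 3, "exit": 1}
succ = {"entry": ["a", "b"], "a": ["exit"], "b": ["exit"], "exit": []}

preds = {n: [] for n in cost}             # invert the edge map
for u, vs in succ.items():
    for v in vs:
        preds[v].append(u)

wcet = {}                                 # longest-path bound ending at node
for n in graphlib.TopologicalSorter(preds).static_order():
    wcet[n] = cost[n] + max((wcet[p] for p in preds[n]), default=0)
print(wcet["exit"])                       # 7, via entry -> a -> exit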
Export
BibTeX
@techreport{Naujoks10a,
TITLE = {A New Combinatorial Approach to Parametric Path Analysis},
AUTHOR = {Althaus, Ernst and Altmeyer, Sebastian and Naujoks, Rouven},
LANGUAGE = {eng},
ISSN = {1860-9821},
NUMBER = {ATR58},
LOCALID = {Local-ID: C1256428004B93B8-7741AE14A57A7C00C125781100477B84-Naujoks10a},
INSTITUTION = {SFB/TR 14 AVACS},
YEAR = {2010},
DATE = {2010},
ABSTRACT = {Hard real-time systems require tasks to finish in time. To guarantee the timeliness of such a system, static timing analyses derive upper bounds on the \emph{worst-case execution time} of tasks. There are two types of timing analyses: numeric and parametric ones. A numeric analysis derives a numeric timing bound and, to this end, assumes all information such as loop bounds to be given a priori. If these bounds are unknown during analysis time, a parametric analysis can compute a timing formula parametric in these variables. A performance bottleneck of timing analyses, numeric and especially parametric, can be the so-called path analysis, which determines the path in the analyzed task with the longest execution time bound. In this paper, we present a new approach to the path analysis. This approach exploits the rather regular structure of software for hard real-time and safety-critical systems. As we show in the evaluation of this paper, we strongly improve upon former techniques in terms of precision and runtime in the parametric case. Even in the numeric case, our approach matches up to state-of-the-art techniques and may be an alternative to commercial tools employed for path analysis.},
TYPE = {AVACS Technical Report},
VOLUME = {58},
}
Endnote
%0 Report
%A Althaus, Ernst
%A Altmeyer, Sebastian
%A Naujoks, Rouven
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A New Combinatorial Approach to Parametric Path Analysis :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-15F7-8
%F EDOC: 536763
%F OTHER: Local-ID: C1256428004B93B8-7741AE14A57A7C00C125781100477B84-Naujoks10a
%Y SFB/TR 14 AVACS
%D 2010
%P 33 p.
%X Hard real-time systems require tasks to finish in time. To guarantee the
timeliness of such a system, static timing analyses derive upper bounds on the
\emph{worst-case execution time} of tasks. There are two types of timing
analyses: numeric and parametric ones. A numeric analysis derives a numeric
timing bound and, to this end, assumes all information such as loop bounds to
be given a priori.
If these bounds are unknown during analysis time, a parametric analysis can
compute a timing formula parametric in these variables.
A performance bottleneck of timing analyses, numeric and especially parametric,
can be the so-called path analysis, which determines the path in the analyzed
task with the longest execution time bound.
In this paper, we present a new approach to the path analysis.
This approach exploits the rather regular structure of software for hard
real-time and safety-critical systems.
As we show in the evaluation of this paper, we strongly improve upon former
techniques in terms of precision and runtime in the parametric case. Even in
the numeric case, our approach matches up to state-of-the-art techniques and
may be an alternative to commercial tools employed for path analysis.
%B AVACS Technical Report
%N 58
%@ false
%U http://www.avacs.org/Publikationen/Open/avacs_technical_report_058.pdf
Efficient Temporal Keyword Queries over Versioned Text
A. Anand, S. Bedathur, K. Berberich and R. Schenkel
Technical Report, 2010
Export
BibTeX
@techreport{AnandBedathurBerberichSchenkel2010,
TITLE = {Efficient Temporal Keyword Queries over Versioned Text},
AUTHOR = {Anand, Avishek and Bedathur, Srikanta and Berberich, Klaus and Schenkel, Ralf},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-003},
NUMBER = {MPI-I-2010-5-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2010},
DATE = {2010},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Anand, Avishek
%A Bedathur, Srikanta
%A Berberich, Klaus
%A Schenkel, Ralf
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Efficient Temporal Keyword Queries over Versioned Text :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-65A0-2
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-003
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2010
%P 39 p.
%B Research Report
A Generic Algebraic Kernel for Non-linear Geometric Applications
E. Berberich, M. Hemmer and M. Kerber
Technical Report, 2010
Export
BibTeX
@techreport{bhk-ak2-inria-2010,
TITLE = {A Generic Algebraic Kernel for Non-linear Geometric Applications},
AUTHOR = {Berberich, Eric and Hemmer, Michael and Kerber, Michael},
LANGUAGE = {eng},
URL = {http://hal.inria.fr/inria-00480031/fr/},
NUMBER = {7274},
LOCALID = {Local-ID: C1256428004B93B8-4DF2B1DAA1910721C12577FB00348D67-bhk-ak2-inria-2010},
INSTITUTION = {INRIA},
ADDRESS = {Sophia Antipolis, France},
YEAR = {2010},
DATE = {2010},
TYPE = {Rapport de recherche / INRIA},
}
Endnote
%0 Report
%A Berberich, Eric
%A Hemmer, Michael
%A Kerber, Michael
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A Generic Algebraic Kernel for Non-linear Geometric Applications :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-15EC-2
%F EDOC: 536754
%U http://hal.inria.fr/inria-00480031/fr/
%F OTHER: Local-ID: C1256428004B93B8-4DF2B1DAA1910721C12577FB00348D67-bhk-ak2-inria-2010
%Y INRIA
%C Sophia Antipolis, France
%D 2010
%P 20 p.
%B Rapport de recherche / INRIA
A Language Modeling Approach for Temporal Information Needs
K. Berberich, S. Bedathur, O. Alonso and G. Weikum
Technical Report, 2010
Abstract
This work addresses information needs that have a temporal
dimension conveyed by a temporal expression in the
user's query. Temporal expressions such as \textsf{``in the 1990s''}
are
frequent, easily extractable, but not leveraged by existing
retrieval models. One challenge when dealing with them is their
inherent uncertainty. It is often unclear which exact time interval
a temporal expression refers to.
We integrate temporal expressions into a language modeling approach,
thus making them first-class citizens of the retrieval model and
considering their inherent uncertainty. Experiments on the New York
Times Annotated Corpus using Amazon Mechanical Turk to collect
queries and obtain relevance assessments demonstrate that
our approach yields substantial improvements in retrieval
effectiveness.
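A toy version of the uncertainty handling can be sketched as follows; this is our simplification, not the paper's exact generative model. Each temporal expression is treated as the set of years it may refer to, and a document's expression "generates" the query's in proportion to their overlap, smoothed Jelinek-Mercer style against a background distribution (the year ranges and the smoothing weight are invented).

def temporal_score(query_years, doc_years, all_years, lam=0.8):
    # Fraction of the document's possible years that satisfy the query,
    # interpolated with the query's background probability.
    overlap = len(query_years & doc_years) / len(doc_years)
    background = len(query_years) / len(all_years)
    return lam * overlap + (1 - lam) * background

all_years = set(range(1980, 2011))
in_the_1990s = set(range(1990, 2000))        # "in the 1990s", uncertain span
print(temporal_score(in_the_1990s, {1995}, all_years))  # on-topic year: high
print(temporal_score(in_the_1990s, {2005}, all_years))  # off-topic year: low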
Export
BibTeX
@techreport{BerberichBedathurAlonsoWeikum2010,
TITLE = {A Language Modeling Approach for Temporal Information Needs},
AUTHOR = {Berberich, Klaus and Bedathur, Srikanta and Alonso, Omar and Weikum, Gerhard},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-001},
NUMBER = {MPI-I-2010-5-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2010},
DATE = {2010},
ABSTRACT = {This work addresses information needs that have a temporal dimension conveyed by a temporal expression in the user's query. Temporal expressions such as \textsf{``in the 1990s''} are frequent, easily extractable, but not leveraged by existing retrieval models. One challenge when dealing with them is their inherent uncertainty. It is often unclear which exact time interval a temporal expression refers to. We integrate temporal expressions into a language modeling approach, thus making them first-class citizens of the retrieval model and considering their inherent uncertainty. Experiments on the New York Times Annotated Corpus using Amazon Mechanical Turk to collect queries and obtain relevance assessments demonstrate that our approach yields substantial improvements in retrieval effectiveness.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Berberich, Klaus
%A Bedathur, Srikanta
%A Alonso, Omar
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T A Language Modeling Approach for Temporal Information Needs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-65AB-C
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2010
%P 29 p.
%X This work addresses information needs that have a temporal
dimension conveyed by a temporal expression in the
user's query. Temporal expressions such as \textsf{``in the 1990s''}
are
frequent, easily extractable, but not leveraged by existing
retrieval models. One challenge when dealing with them is their
inherent uncertainty. It is often unclear which exact time interval
a temporal expression refers to.
We integrate temporal expressions into a language modeling approach,
thus making them first-class citizens of the retrieval model and
considering their inherent uncertainty. Experiments on the New York
Times Annotated Corpus using Amazon Mechanical Turk to collect
queries and obtain relevance assessments demonstrate that
our approach yields substantial improvements in retrieval
effectiveness.
%B Research Report
Real-time Text Queries with Tunable Term Pair Indexes
A. Broschart and R. Schenkel
Technical Report, 2010
Abstract
Term proximity scoring is an established means in information retrieval for improving result quality of full-text queries. Integrating such proximity scores into efficient query processing, however, has not been equally well studied. Existing methods make use of precomputed lists of documents where tuples of terms, usually pairs, occur together, usually incurring a huge index size compared to term-only indexes. This paper introduces a joint framework for trading off index size and result quality, and provides optimization techniques for tuning precomputed indexes towards either maximal result quality or maximal query processing performance, given an upper bound for the index size. The framework allows to selectively materialize lists for pairs based on a query log to further reduce index size. Extensive experiments with two large text collections demonstrate runtime improvements of several orders of magnitude over existing text-based processing techniques with reasonable index sizes.
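The core data structure can be sketched as follows (our toy, not the tuned framework from the report): pair postings carry a precomputed proximity score, so a two-term query is answered from one sorted list instead of intersecting positional postings at query time.

from collections import defaultdict

docs = {1: "cheap flights to rome", 2: "rome guide cheap hotels flights"}

pair_index = defaultdict(dict)            # (t1, t2) -> {doc_id: proximity}
for doc_id, text in docs.items():
    toks = text.split()
    for i in range(len(toks)):
        for j in range(i + 1, len(toks)):
            key = tuple(sorted((toks[i], toks[j])))
            score = 1.0 / (j - i)         # adjacent pairs score highest
            prev = pair_index[key].get(doc_id, 0.0)
            pair_index[key][doc_id] = max(prev, score)

def query(t1, t2):
    # One lookup plus a sort; no positional intersection needed.
    postings = pair_index[tuple(sorted((t1, t2)))]
    return sorted(postings.items(), key=lambda kv: -kv[1])

print(query("cheap", "flights"))          # [(1, 1.0), (2, 0.5)]

A real system would cap index size by materializing lists only for pairs that actually co-occur in queries, which is the selective materialization the abstract mentions.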
Export
BibTeX
@techreport{BroschartSchenkel2010,
TITLE = {Real-time Text Queries with Tunable Term Pair Indexes},
AUTHOR = {Broschart, Andreas and Schenkel, Ralf},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-006},
NUMBER = {MPI-I-2010-5-006},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2010},
DATE = {2010},
ABSTRACT = {Term proximity scoring is an established means in information retrieval for improving result quality of full-text queries. Integrating such proximity scores into efficient query processing, however, has not been equally well studied. Existing methods make use of precomputed lists of documents where tuples of terms, usually pairs, occur together, usually incurring a huge index size compared to term-only indexes. This paper introduces a joint framework for trading off index size and result quality, and provides optimization techniques for tuning precomputed indexes towards either maximal result quality or maximal query processing performance, given an upper bound for the index size. The framework allows to selectively materialize lists for pairs based on a query log to further reduce index size. Extensive experiments with two large text collections demonstrate runtime improvements of several orders of magnitude over existing text-based processing techniques with reasonable index sizes.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Broschart, Andreas
%A Schenkel, Ralf
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Real-time Text Queries with Tunable Term Pair Indexes :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-658C-1
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-006
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2010
%P 41 p.
%X Term proximity scoring is an established means in information retrieval for improving result quality of full-text queries. Integrating such proximity scores into efficient query processing, however, has not been equally well studied. Existing methods make use of precomputed lists of documents where tuples of terms, usually pairs, occur together, usually incurring a huge index size compared to term-only indexes. This paper introduces a joint framework for trading off index size and result quality, and provides optimization techniques for tuning precomputed indexes towards either maximal result quality or maximal query processing performance, given an upper bound for the index size. The framework allows to selectively materialize lists for pairs based on a query log to further reduce index size. Extensive experiments with two large text collections demonstrate runtime improvements of several orders of magnitude over existing text-based processing techniques with reasonable index sizes.
%B Research Report
LIVE: A Lineage-Supported Versioned DBMS
A. Das Sarma, M. Theobald and J. Widom
Technical Report, 2010
Export
BibTeX
@techreport{ilpubs-926,
TITLE = {{LIVE}: A Lineage-Supported Versioned {DBMS}},
AUTHOR = {Das Sarma, Anish and Theobald, Martin and Widom, Jennifer},
LANGUAGE = {eng},
URL = {http://ilpubs.stanford.edu:8090/926/},
NUMBER = {ILPUBS-926},
LOCALID = {Local-ID: C1256DBF005F876D-C48EC96138450196C12576B1003F58D3-ilpubs-926},
INSTITUTION = {Stanford University},
ADDRESS = {Stanford},
YEAR = {2010},
DATE = {2010},
TYPE = {Technical Report},
}
Endnote
%0 Report
%A Das Sarma, Anish
%A Theobald, Martin
%A Widom, Jennifer
%+ External Organizations
Databases and Information Systems, MPI for Informatics, Max Planck Society
External Organizations
%T LIVE: A Lineage-Supported Versioned DBMS :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1512-A
%F EDOC: 536357
%U http://ilpubs.stanford.edu:8090/926/
%F OTHER: Local-ID: C1256DBF005F876D-C48EC96138450196C12576B1003F58D3-ilpubs-926
%Y Stanford University
%C Stanford
%D 2010
%P 13 p.
%B Technical Report
Query Relaxation for Entity-relationship Search
S. Elbassuoni, M. Ramanath and G. Weikum
Technical Report, 2010
Export
BibTeX
@techreport{Elbassuoni-relax2010,
TITLE = {Query Relaxation for Entity-relationship Search},
AUTHOR = {Elbassuoni, Shady and Ramanath, Maya and Weikum, Gerhard},
LANGUAGE = {eng},
NUMBER = {MPI-I-2010-5-008},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2010},
DATE = {2010},
TYPE = {Report},
}
Endnote
%0 Report
%A Elbassuoni, Shady
%A Ramanath, Maya
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Query Relaxation for Entity-relationship Search :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0019-B30B-6
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2010
%B Report
Automatic Verification of Parametric Specifications with Complex Topologies
J. Faber, C. Ihlemann, S. Jacobs and V. Sofronie-Stokkermans
Technical Report, 2010
Abstract
The focus of this paper is on reducing the complexity in
verification by exploiting modularity at various levels:
in specification, in verification, and structurally.
\begin{itemize}
\item For specifications, we use the modular language CSP-OZ-DC,
which allows us to decouple verification tasks concerning
data from those concerning durations.
\item At the verification level, we exploit modularity in
theorem proving for rich data structures and use this for
invariant checking.
\item At the structural level, we analyze possibilities
for modular verification of systems consisting of various
components which interact.
\end{itemize}
We illustrate these ideas by automatically verifying safety
properties of a case study from the European Train Control
System standard, which extends previous examples by comprising a
complex track topology with lists of track segments and trains
with different routes.
Export
BibTeX
@techreport{faber-ihlemann-jacobs-sofronie-2010-report,
TITLE = {Automatic Verification of Parametric Specifications with Complex Topologies},
AUTHOR = {Faber, Johannes and Ihlemann, Carsten and Jacobs, Swen and Sofronie-Stokkermans, Viorica},
LANGUAGE = {eng},
ISSN = {1860-9821},
NUMBER = {ATR66},
LOCALID = {Local-ID: C125716C0050FB51-2E8AD7BA67FF4CB5C12577B4004D8EF8-faber-ihlemann-jacobs-sofronie-2010-report},
INSTITUTION = {SFB/TR 14 AVACS},
YEAR = {2010},
DATE = {2010},
ABSTRACT = {The focus of this paper is on reducing the complexity in verification by exploiting modularity at various levels: in specification, in verification, and structurally. \begin{itemize} \item For specifications, we use the modular language CSP-OZ-DC, which allows us to decouple verification tasks concerning data from those concerning durations. \item At the verification level, we exploit modularity in theorem proving for rich data structures and use this for invariant checking. \item At the structural level, we analyze possibilities for modular verification of systems consisting of various components which interact. \end{itemize} We illustrate these ideas by automatically verifying safety properties of a case study from the European Train Control System standard, which extends previous examples by comprising a complex track topology with lists of track segments and trains with different routes.},
TYPE = {AVACS Technical Report},
VOLUME = {66},
}
Endnote
%0 Report
%A Faber, Johannes
%A Ihlemann, Carsten
%A Jacobs, Swen
%A Sofronie-Stokkermans, Viorica
%+ Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
%T Automatic Verification of Parametric Specifications with Complex Topologies :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-14A6-8
%F EDOC: 536341
%F OTHER: Local-ID: C125716C0050FB51-2E8AD7BA67FF4CB5C12577B4004D8EF8-faber-ihlemann-jacobs-sofronie-2010-report
%Y SFB/TR 14 AVACS
%D 2010
%P 40 p.
%X The focus of this paper is on reducing the complexity in
verification by exploiting modularity at various levels:
in specification, in verification, and structurally.
\begin{itemize}
\item For specifications, we use the modular language CSP-OZ-DC,
which allows us to decouple verification tasks concerning
data from those concerning durations.
\item At the verification level, we exploit modularity in
theorem proving for rich data structures and use this for
invariant checking.
\item At the structural level, we analyze possibilities
for modular verification of systems consisting of various
components which interact.
\end{itemize}
We illustrate these ideas by automatically verifying safety
properties of a case study from the European Train Control
System standard, which extends previous examples by comprising a
complex track topology with lists of track segments and trains
with different routes.
%B AVACS Technical Report
%N 66
%@ false
YAGO2: A Spatially and Temporally Enhanced Knowledge Base from Wikipedia
J. Hoffart, F. M. Suchanek, K. Berberich and G. Weikum
Technical Report, 2010
Abstract
We present YAGO2, an extension of the YAGO knowledge base, in which entities,
facts, and events are anchored in both time and space. YAGO2 is built
automatically from Wikipedia, GeoNames, and WordNet. It contains 80 million
facts about 9.8 million entities. Human evaluation confirmed an accuracy of
95\% of the facts in YAGO2. In this paper, we present the extraction
methodology, the integration of the spatio-temporal dimension, and our
knowledge representation SPOTL, an extension of the original SPO-triple model
to time and space.
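The SPOTL representation mentioned in the abstract amounts to SPO triples carrying two extra components. A minimal sketch (toy facts of our own, not actual YAGO2 data or its query language):

from collections import namedtuple

SPOTL = namedtuple("SPOTL", "subject predicate object time location")

facts = [
    SPOTL("Max_Planck", "wonPrize", "Nobel_Prize", 1918, "Stockholm"),
    SPOTL("Max_Planck", "bornIn", "Kiel", 1858, "Kiel"),
]

# Toy spatio-temporal filter: facts about Max_Planck after 1900, with place.
for f in facts:
    if f.subject == "Max_Planck" and f.time is not None and f.time > 1900:
        print(f.predicate, f.object, f.time, f.location)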
Export
BibTeX
@techreport{Hoffart2010,
TITLE = {{YAGO}2: A Spatially and Temporally Enhanced Knowledge Base from {Wikipedia}},
AUTHOR = {Hoffart, Johannes and Suchanek, Fabian M. and Berberich, Klaus and Weikum, Gerhard},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-007},
NUMBER = {MPI-I-2010-5-007},
LOCALID = {Local-ID: C1256DBF005F876D-37A86CDFCE56B71DC125784800386E6A-Hoffart2010},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2010},
DATE = {2010},
ABSTRACT = {We present YAGO2, an extension of the YAGO knowledge base, in which entities, facts, and events are anchored in both time and space. YAGO2 is built automatically from Wikipedia, GeoNames, and WordNet. It contains 80 million facts about 9.8 million entities. Human evaluation confirmed an accuracy of 95\% of the facts in YAGO2. In this paper, we present the extraction methodology, the integration of the spatio-temporal dimension, and our knowledge representation SPOTL, an extension of the original SPO-triple model to time and space.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Hoffart, Johannes
%A Suchanek, Fabian M.
%A Berberich, Klaus
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T YAGO2: A Spatially and Temporally Enhanced Knowledge Base from Wikipedia :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-155B-A
%F EDOC: 536412
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-007
%F OTHER: Local-ID: C1256DBF005F876D-37A86CDFCE56B71DC125784800386E6A-Hoffart2010
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2010
%P 55 p.
%X We present YAGO2, an extension of the YAGO knowledge base, in which entities,
facts, and events are anchored in both time and space. YAGO2 is built
automatically from Wikipedia, GeoNames, and WordNet. It contains 80 million
facts about 9.8 million entities. Human evaluation confirmed an accuracy of
95\% of the facts in YAGO2. In this paper, we present the extraction
methodology, the integration of the spatio-temporal dimension, and our
knowledge representation SPOTL, an extension of the original SPO-triple model
to time and space.
%B Research Report
Maximum Cardinality Popular Matchings in Strict Two-sided Preference Lists
C.-C. Huang and T. Kavitha
Technical Report, 2010
Abstract
We consider the problem of computing a maximum cardinality {\em popular}
matching in a bipartite
graph $G = (\A\cup\B, E)$ where each vertex $u \in \A\cup\B$ ranks its
neighbors in a
strict order of preference. This is the same as an instance of the {\em
stable marriage}
problem with incomplete lists.
A matching $M^*$ is said to be popular if there is no matching $M$ such
that more vertices are better off in $M$ than in $M^*$.
\smallskip
Popular matchings have been extensively studied in the case of one-sided
preference lists, i.e.,
only vertices of $\A$ have preferences over their neighbors while
vertices in $\B$ have no
preferences; polynomial time algorithms
have been shown here to determine if a given instance admits a popular
matching
or not and if so, to compute one with maximum cardinality. It has very
recently
been shown that for two-sided preference lists, the problem of
determining if a given instance
admits a popular matching or not is NP-complete. However this hardness
result
assumes that preference lists have {\em ties}.
When preference lists are {\em strict}, it is easy to
show that popular matchings always exist since stable matchings always
exist and they are popular.
But the
complexity of computing a maximum cardinality popular matching was
unknown. In this paper
we show an $O(mn)$ algorithm for this problem, where $n = |\A| + |\B|$ and
$m = |E|$.
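The popularity relation itself, though not the paper's $O(mn)$ algorithm, is straightforward to state in code. In this sketch (instance and names invented), a matching M beats M* if strictly more vertices get a partner they rank higher in M than in M*, with being unmatched ranked below every listed neighbor.

def prefers(prefs, v, new, old):
    # True iff vertex v strictly prefers partner `new` over `old`.
    rank = {u: i for i, u in enumerate(prefs[v])}
    worst = len(prefs[v])                 # unmatched ranks below everyone
    return rank.get(new, worst) < rank.get(old, worst)

def more_popular(prefs, M_new, M_old):
    gain = sum(prefers(prefs, v, M_new.get(v), M_old.get(v)) for v in prefs)
    loss = sum(prefers(prefs, v, M_old.get(v), M_new.get(v)) for v in prefs)
    return gain > loss

prefs = {"a1": ["b1", "b2"], "a2": ["b1"], "b1": ["a2", "a1"], "b2": ["a1"]}
M1 = {"a1": "b1", "b1": "a1"}                          # a2 and b2 unmatched
M2 = {"a1": "b2", "b2": "a1", "a2": "b1", "b1": "a2"}  # everyone matched
print(more_popular(prefs, M2, M1))                     # True: 3 gain vs 1 loss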
Export
BibTeX
@techreport{HuangKavitha2010,
TITLE = {Maximum Cardinality Popular Matchings in Strict Two-sided Preference Lists},
AUTHOR = {Huang, Chien-Chung and Kavitha, Telikepalli},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-1-001},
NUMBER = {MPI-I-2010-1-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2010},
DATE = {2010},
ABSTRACT = {We consider the problem of computing a maximum cardinality {\em popular} matching in a bipartite graph $G = (\A\cup\B, E)$ where each vertex $u \in \A\cup\B$ ranks its neighbors in a strict order of preference. This is the same as an instance of the {\em stable marriage} problem with incomplete lists. A matching $M^*$ is said to be popular if there is no matching $M$ such that more vertices are better off in $M$ than in $M^*$. \smallskip Popular matchings have been extensively studied in the case of one-sided preference lists, i.e., only vertices of $\A$ have preferences over their neighbors while vertices in $\B$ have no preferences; polynomial time algorithms have been shown here to determine if a given instance admits a popular matching or not and if so, to compute one with maximum cardinality. It has very recently been shown that for two-sided preference lists, the problem of determining if a given instance admits a popular matching or not is NP-complete. However this hardness result assumes that preference lists have {\em ties}. When preference lists are {\em strict}, it is easy to show that popular matchings always exist since stable matchings always exist and they are popular. But the complexity of computing a maximum cardinality popular matching was unknown. In this paper we show an $O(mn)$ algorithm for this problem, where $n = |\A| + |\B|$ and $m = |E|$.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Huang, Chien-Chung
%A Kavitha, Telikepalli
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T Maximum Cardinality Popular Matchings in Strict Two-sided Preference Lists :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6668-9
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-1-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2010
%P 17 p.
%X We consider the problem of computing a maximum cardinality {\em popular}
matching in a bipartite
graph $G = (\A\cup\B, E)$ where each vertex $u \in \A\cup\B$ ranks its
neighbors in a
strict order of preference. This is the same as an instance of the {\em
stable marriage}
problem with incomplete lists.
A matching $M^*$ is said to be popular if there is no matching $M$ such
that more vertices are better off in $M$ than in $M^*$.
\smallskip
Popular matchings have been extensively studied in the case of one-sided
preference lists, i.e.,
only vertices of $\A$ have preferences over their neighbors while
vertices in $\B$ have no
preferences; polynomial time algorithms
have been shown here to determine if a given instance admits a popular
matching
or not and if so, to compute one with maximum cardinality. It has very
recently
been shown that for two-sided preference lists, the problem of
determining if a given instance
admits a popular matching or not is NP-complete. However this hardness
result
assumes that preference lists have {\em ties}.
When preference lists are {\em strict}, it is easy to
show that popular matchings always exist since stable matchings always
exist and they are popular.
But the
complexity of computing a maximum cardinality popular matching was
unknown. In this paper
we show an $O(mn)$ algorithm for this problem, where $n = |\A| + |\B|$ and
$m = |E|$.
%B Research Report
On Hierarchical Reasoning in Combinations of Theories
C. Ihlemann and V. Sofronie-Stokkermans
Technical Report, 2010a
C. Ihlemann and V. Sofronie-Stokkermans
Technical Report, 2010a
Abstract
In this paper we study theory combinations over non-disjoint
signatures in which hierarchical and modular reasoning is
possible. We use a notion of locality of a theory extension
parameterized by a closure operator on ground terms.
We give criteria for recognizing these types of theory
extensions. We then show that combinations of extensions of
theories which are local in this extended sense have also a
locality property and hence allow modular and hierarchical
reasoning. We thus obtain parameterized decidability and
complexity results for many (combinations of) theories
important in verification.
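For orientation, the locality property referred to here is commonly stated
as follows (our paraphrase in our own notation, not a quotation from the
report): for a base theory $T_0$ extended by a set of clauses $K$, and for
every set $G$ of ground clauses,
$$T_0 \cup K \cup G \models \bot \iff T_0 \cup K[\Psi(G)] \cup G \models \bot,$$
where $K[\Psi(G)]$ denotes the instances of $K$ whose extension terms are
drawn from the closure $\Psi(G)$ of the ground terms occurring in $K$ and
$G$. This is what enables hierarchical reasoning: satisfiability in the
extension reduces to a ground satisfiability problem over the base theory.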
Export
BibTeX
@techreport{Ihlemann-Sofronie-Stokkermans-atr60-2010,
TITLE = {On Hierarchical Reasoning in Combinations of Theories},
AUTHOR = {Ihlemann, Carsten and Sofronie-Stokkermans, Viorica},
LANGUAGE = {eng},
ISSN = {1860-9821},
NUMBER = {ATR60},
LOCALID = {Local-ID: C125716C0050FB51-8E77AFE123C76116C1257782003FEBDA-Ihlemann-Sofronie-Stokkermans-atr60-2010},
INSTITUTION = {SFB/TR 14 AVACS},
YEAR = {2010},
DATE = {2010},
ABSTRACT = {In this paper we study theory combinations over non-disjoint signatures in which hierarchical and modular reasoning is possible. We use a notion of locality of a theory extension parameterized by a closure operator on ground terms. We give criteria for recognizing these types of theory extensions. We then show that combinations of extensions of theories which are local in this extended sense have also a locality property and hence allow modular and hierarchical reasoning. We thus obtain parameterized decidability and complexity results for many (combinations of) theories important in verification.},
TYPE = {AVACS Technical Report},
VOLUME = {60},
}
Endnote
%0 Report
%A Ihlemann, Carsten
%A Sofronie-Stokkermans, Viorica
%+ Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
%T On Hierarchical Reasoning in Combinations of Theories :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-14B7-2
%F EDOC: 536339
%F OTHER: Local-ID: C125716C0050FB51-8E77AFE123C76116C1257782003FEBDA-Ihlemann-Sofronie-Stokkermans-atr60-2010
%Y SFB/TR 14 AVACS
%D 2010
%P 26 p.
%X In this paper we study theory combinations over non-disjoint
signatures in which hierarchical and modular reasoning is
possible. We use a notion of locality of a theory extension
parameterized by a closure operator on ground terms.
We give criteria for recognizing these types of theory
extensions. We then show that combinations of extensions of
theories which are local in this extended sense have also a
locality property and hence allow modular and hierarchical
reasoning. We thus obtain parameterized decidability and
complexity results for many (combinations of) theories
important in verification.
%B AVACS Technical Report
%N 60
%@ false
%U http://www.avacs.org/Publikationen/Open/avacs_technical_report_060.pdf
System Description: H-PILoT (Version 1.9)
C. Ihlemann and V. Sofronie-Stokkermans
Technical Report, 2010b
C. Ihlemann and V. Sofronie-Stokkermans
Technical Report, 2010b
Abstract
This system description provides an overview of H-PILoT
(Hierarchical Proving by Instantiation in Local Theory
extensions), a program for hierarchical reasoning in
extensions of logical theories.
H-PILoT reduces deduction problems in the theory extension
to deduction problems in the base theory.
Specialized provers and standard SMT solvers can be used
for testing the satisfiability of the formulae obtained
after the reduction. For a certain type of theory extension
(namely for {\em local theory extensions}) this
hierarchical reduction is sound and complete and --
if the formulae obtained this way belong to a fragment
decidable in the base theory -- H-PILoT provides a decision
procedure for testing satisfiability of ground formulae,
and can also be used for model generation.
Export
BibTeX
@techreport{Ihlemann-Sofronie-Stokkermans-atr61-2010,
TITLE = {System Description: H-{PILoT} (Version 1.9)},
AUTHOR = {Ihlemann, Carsten and Sofronie-Stokkermans, Viorica},
LANGUAGE = {eng},
ISSN = {1860-9821},
NUMBER = {ATR61},
LOCALID = {Local-ID: C125716C0050FB51-5F53450808E13ED9C125778C00501AE6-Ihlemann-Sofronie-Stokkermans-atr61-2010},
INSTITUTION = {SFB/TR 14 AVACS},
YEAR = {2010},
DATE = {2010},
ABSTRACT = {This system description provides an overview of H-PILoT (Hierarchical Proving by Instantiation in Local Theory extensions), a program for hierarchical reasoning in extensions of logical theories. H-PILoT reduces deduction problems in the theory extension to deduction problems in the base theory. Specialized provers and standard SMT solvers can be used for testing the satisfiability of the formulae obtained after the reduction. For a certain type of theory extension (namely for {\em local theory extensions}) this hierarchical reduction is sound and complete and -- if the formulae obtained this way belong to a fragment decidable in the base theory -- H-PILoT provides a decision procedure for testing satisfiability of ground formulae, and can also be used for model generation.},
TYPE = {AVACS Technical Report},
VOLUME = {61},
}
Endnote
%0 Report
%A Ihlemann, Carsten
%A Sofronie-Stokkermans, Viorica
%+ Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
%T System Description: H-PILoT (Version 1.9) :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-14C5-2
%F EDOC: 536340
%F OTHER: Local-ID: C125716C0050FB51-5F53450808E13ED9C125778C00501AE6-Ihlemann-Sofronie-Stokkermans-atr61-2010
%Y SFB/TR 14 AVACS
%D 2010
%P 45 p.
%X This system description provides an overview of H-PILoT
(Hierarchical Proving by Instantiation in Local Theory
extensions), a program for hierarchical reasoning in
extensions of logical theories.
H-PILoT reduces deduction problems in the theory extension
to deduction problems in the base theory.
Specialized provers and standard SMT solvers can be used
for testing the satisfiability of the formulae obtained
after the reduction. For a certain type of theory extension
(namely for {\em local theory extensions}) this
hierarchical reduction is sound and complete and --
if the formulae obtained this way belong to a fragment
decidable in the base theory -- H-PILoT provides a decision
procedure for testing satisfiability of ground formulae,
and can also be used for model generation.
%B AVACS Technical Report
%N 61
%@ false
Query Evaluation with Asymmetric Web Services
N. Preda, F. Suchanek, W. Yuan and G. Weikum
Technical Report, 2010
N. Preda, F. Suchanek, W. Yuan and G. Weikum
Technical Report, 2010
Export
BibTeX
@techreport{PredaSuchanekYuanWeikum2011,
TITLE = {Query Evaluation with Asymmetric Web Services},
AUTHOR = {Preda, Nicoleta and Suchanek, Fabian and Yuan, Wenjun and Weikum, Gerhard},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-004},
NUMBER = {MPI-I-2010-5-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2010},
DATE = {2010},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Preda, Nicoleta
%A Suchanek, Fabian
%A Yuan, Wenjun
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Query Evaluation with Asymmetric Web Services :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-659D-C
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-004
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2010
%P 31 p.
%B Research Report
Bonsai: Growing Interesting Small Trees
S. Seufert, S. Bedathur, J. Mestre and G. Weikum
Technical Report, 2010
S. Seufert, S. Bedathur, J. Mestre and G. Weikum
Technical Report, 2010
Abstract
Graphs are increasingly used to model a variety of loosely structured data such
as biological or social networks and entity-relationships. Given this profusion
of large-scale graph data, efficiently discovering interesting substructures
buried
within is essential. These substructures are typically used in determining
subsequent actions, such as conducting visual analytics by humans or designing
expensive biomedical experiments. In such settings, it is often desirable to
constrain the size of the discovered results in order to directly control the
associated costs. In this report, we address the problem of finding
cardinality-constrained connected
subtrees from large node-weighted graphs that maximize the sum of weights of
selected nodes. We provide an efficient constant-factor approximation algorithm
for this strongly NP-hard problem. Our techniques can be applied in a wide
variety
of application settings, for example in differential analysis of graphs, a
problem that frequently arises in bioinformatics but also has applications on
the web.
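As a point of reference only, the following Python sketch grows a connected
node set of at most k nodes by greedy best-first attachment; this is a naive
heuristic of our own devising to make the problem statement concrete, not
the report's constant-factor approximation algorithm, and it carries no
approximation guarantee. The adjacency and weight structures (dicts keyed
by node) are assumptions of the sketch.

import heapq

def greedy_subtree(adj, weight, root, k):
    """adj: dict node -> iterable of neighbors; weight: dict node -> float.
    Returns a connected set of at most k nodes containing root."""
    chosen = {root}
    frontier = [(-weight[v], v) for v in adj[root]]
    heapq.heapify(frontier)
    while frontier and len(chosen) < k:
        _, v = heapq.heappop(frontier)  # heaviest frontier node first
        if v in chosen:
            continue
        chosen.add(v)
        for u in adj[v]:
            if u not in chosen:
                heapq.heappush(frontier, (-weight[u], u))
    return chosen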
Export
BibTeX
@techreport{Seufert2010a,
TITLE = {Bonsai: Growing Interesting Small Trees},
AUTHOR = {Seufert, Stephan and Bedathur, Srikanta and Mestre, Julian and Weikum, Gerhard},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-005},
NUMBER = {MPI-I-2010-5-005},
LOCALID = {Local-ID: C1256DBF005F876D-BC73995718B48415C12577E600538833-Seufert2010a},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2010},
DATE = {2010},
ABSTRACT = {Graphs are increasingly used to model a variety of loosely structured data such as biological or social networks and entity-relationships. Given this profusion of large-scale graph data, efficiently discovering interesting substructures buried within is essential. These substructures are typically used in determining subsequent actions, such as conducting visual analytics by humans or designing expensive biomedical experiments. In such settings, it is often desirable to constrain the size of the discovered results in order to directly control the associated costs. In this report, we address the problem of finding cardinality-constrained connected subtrees from large node-weighted graphs that maximize the sum of weights of selected nodes. We provide an efficient constant-factor approximation algorithm for this strongly NP-hard problem. Our techniques can be applied in a wide variety of application settings, for example in differential analysis of graphs, a problem that frequently arises in bioinformatics but also has applications on the web.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Seufert, Stephan
%A Bedathur, Srikanta
%A Mestre, Julian
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Bonsai: Growing Interesting Small Trees :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-14D8-7
%F EDOC: 536383
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-005
%F OTHER: Local-ID: C1256DBF005F876D-BC73995718B48415C12577E600538833-Seufert2010a
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2010
%P 32 p.
%X Graphs are increasingly used to model a variety of loosely structured data such
as biological or social networks and entity-relationships. Given this profusion
of large-scale graph data, efficiently discovering interesting substructures
buried
within is essential. These substructures are typically used in determining
subsequent actions, such as conducting visual analytics by humans or designing
expensive biomedical experiments. In such settings, it is often desirable to
constrain the size of the discovered results in order to directly control the
associated costs. In this report, we address the problem of finding
cardinality-constrained connected
subtrees from large node-weighted graphs that maximize the sum of weights of
selected nodes. We provide an efficient constant-factor approximation algorithm
for this strongly NP-hard problem. Our techniques can be applied in a wide
variety
of application settings, for example in differential analysis of graphs, a
problem that frequently arises in bioinformatics but also has applications on
the web.
%B Research Report
On the saturation of YAGO
M. Suda, C. Weidenbach and P. Wischnewski
Technical Report, 2010
M. Suda, C. Weidenbach and P. Wischnewski
Technical Report, 2010
Export
BibTeX
@techreport{SudaWischnewski2010,
TITLE = {On the saturation of {YAGO}},
AUTHOR = {Suda, Martin and Weidenbach, Christoph and Wischnewski, Patrick},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-RG1-001},
NUMBER = {MPI-I-2010-RG1-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2010},
DATE = {2010},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Suda, Martin
%A Weidenbach, Christoph
%A Wischnewski, Patrick
%+ Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
%T On the saturation of YAGO :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6584-2
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-RG1-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2010
%P 50 p.
%B Research Report
A Bayesian Approach to Manifold Topology Reconstruction
A. Tevs, M. Wand, I. Ihrke and H.-P. Seidel
Technical Report, 2010
A. Tevs, M. Wand, I. Ihrke and H.-P. Seidel
Technical Report, 2010
Abstract
In this paper, we investigate the problem of statistical reconstruction of
piecewise linear manifold topology. Given a noisy, probably undersampled point
cloud from a one- or two-manifold, the algorithm reconstructs an approximated
most likely mesh in a Bayesian sense from which the sample might have been
taken. We incorporate statistical priors on the object geometry to improve the
reconstruction quality if additional knowledge about the class of original
shapes is available. The priors can be formulated analytically or learned from
example geometry with known manifold tessellation. The statistical objective
function is approximated by a linear programming / integer programming problem,
for which a globally optimal solution is found. We apply the algorithm to a set
of 2D and 3D reconstruction examples, demonstrating that a statistics-based
manifold reconstruction is feasible, and still yields plausible results in
situations where sampling conditions are violated.
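The phrase "most likely mesh in a Bayesian sense" corresponds to the generic
maximum a posteriori formulation (our notation, not the report's):
$$\hat{M} = \arg\max_M P(M \mid S) = \arg\max_M P(S \mid M)\,P(M),$$
where $S$ is the observed point cloud, $P(S \mid M)$ is the likelihood of
sampling $S$ from a candidate mesh $M$, and $P(M)$ is the geometry prior;
per the abstract, this objective is then approximated by a linear/integer
program and solved to global optimality.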
Export
BibTeX
@techreport{TevsTechReport2009,
TITLE = {A Bayesian Approach to Manifold Topology Reconstruction},
AUTHOR = {Tevs, Art and Wand, Michael and Ihrke, Ivo and Seidel, Hans-Peter},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2009-4-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2010},
DATE = {2010},
ABSTRACT = {In this paper, we investigate the problem of statistical reconstruction of piecewise linear manifold topology. Given a noisy, probably undersampled point cloud from a one- or two-manifold, the algorithm reconstructs an approximated most likely mesh in a Bayesian sense from which the sample might have been taken. We incorporate statistical priors on the object geometry to improve the reconstruction quality if additional knowledge about the class of original shapes is available. The priors can be formulated analytically or learned from example geometry with known manifold tessellation. The statistical objective function is approximated by a linear programming / integer programming problem, for which a globally optimal solution is found. We apply the algorithm to a set of 2D and 3D reconstruction examples, demonstrating that a statistics-based manifold reconstruction is feasible, and still yields plausible results in situations where sampling conditions are violated.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Tevs, Art
%A Wand, Michael
%A Ihrke, Ivo
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
International Max Planck Research School, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Graphics - Optics - Vision, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T A Bayesian Approach to Manifold Topology Reconstruction :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1722-7
%F EDOC: 537282
%@ 0946-011X
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2010
%P 23 p.
%X In this paper, we investigate the problem of statistical reconstruction of
piecewise linear manifold topology. Given a noisy, probably undersampled point
cloud from a one- or two-manifold, the algorithm reconstructs an approximated
most likely mesh in a Bayesian sense from which the sample might have been
taken. We incorporate statistical priors on the object geometry to improve the
reconstruction quality if additional knowledge about the class of original
shapes is available. The priors can be formulated analytically or learned from
example geometry with known manifold tessellation. The statistical objective
function is approximated by a linear programming / integer programming problem,
for which a globally optimal solution is found. We apply the algorithm to a set
of 2D and 3D reconstruction examples, demonstrating that a statistics-based
manifold reconstruction is feasible, and still yields plausible results in
situations where sampling conditions are violated.
%B Research Report
URDF: Efficient Reasoning in Uncertain RDF Knowledge Bases with Soft and Hard Rules
M. Theobald, M. Sozio, F. Suchanek and N. Nakashole
Technical Report, 2010
M. Theobald, M. Sozio, F. Suchanek and N. Nakashole
Technical Report, 2010
Abstract
We present URDF, an efficient reasoning framework for graph-based,
nonschematic RDF knowledge bases and SPARQL-like queries. URDF augments
first-order reasoning by a combination of soft rules, with Datalog-style
recursive implications, and hard rules, in the shape of mutually exclusive
sets of facts. It incorporates the common possible worlds semantics with
independent base facts, as is prevalent in most probabilistic database
approaches, but also supports semantically more expressive, probabilistic
first-order representations such as Markov Logic Networks.
As knowledge extraction on the Web often is an iterative (and inherently
noisy) process, URDF explicitly targets the resolution of inconsistencies
between the underlying RDF base facts and the inference rules. The core of
our approach is a novel and efficient approximation algorithm for a
generalized version of the Weighted MAX-SAT problem, allowing us to
dynamically resolve such inconsistencies directly at query processing time.
Our MAX-SAT algorithm has a worst-case running time of $O(|C| \cdot |S|)$,
where $|C|$ and $|S|$ denote the number of facts in grounded soft and hard
rules, respectively, and it comes with tight approximation guarantees with
respect to the shape of the rules and the distribution of confidences of
the facts they contain. Experiments over various benchmark settings confirm
the high robustness and significantly improved runtime of our reasoning
framework in comparison to state-of-the-art techniques for MCMC sampling
such as MAP inference and MC-SAT.
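For readers unfamiliar with the underlying optimization problem, the
following Python sketch shows the generic weighted MAX-SAT setting: choose
a truth assignment that maximizes the total weight of satisfied clauses.
The one-variable-at-a-time greedy strategy and the clause encoding are our
own illustration, unrelated to URDF's specialized $O(|C| \cdot |S|)$
algorithm.

def greedy_maxsat(variables, clauses):
    """clauses: list of (weight, {var: wanted_truth_value}) pairs.
    Assigns variables greedily, keeping the option that leaves the larger
    total weight of clauses satisfied or still satisfiable."""
    assignment = {}
    for v in variables:
        def weight_if(val):
            a = dict(assignment, **{v: val})
            # A clause counts if some literal is already satisfied or its
            # variable is still unassigned (optimistically satisfiable).
            return sum(w for w, lits in clauses
                       if any(a.get(x, wanted) == wanted
                              for x, wanted in lits.items()))
        assignment[v] = weight_if(True) >= weight_if(False)
    return assignment

# Example: greedy_maxsat(["x", "y"],
#                        [(2.0, {"x": True}), (1.0, {"x": False, "y": True})])
# assigns x=True and y=True, satisfying both clauses (total weight 3.0).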
Export
BibTeX
@techreport{urdf-tr-2010,
TITLE = {{URDF}: Efficient Reasoning in Uncertain {RDF} Knowledge Bases with Soft and Hard Rules},
AUTHOR = {Theobald, Martin and Sozio, Mauro and Suchanek, Fabian and Nakashole, Ndapandula},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-002},
NUMBER = {MPI-I-2010-5-002},
LOCALID = {Local-ID: C1256DBF005F876D-4F6C2407136ECAA6C125770E003634BE-urdf-tr-2010},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2010},
DATE = {2010},
ABSTRACT = {We present URDF, an efficient reasoning framework for graph-based, nonschematic RDF knowledge bases and SPARQL-like queries. URDF augments first-order reasoning by a combination of soft rules, with Datalog-style recursive implications, and hard rules, in the shape of mutually exclusive sets of facts. It incorporates the common possible worlds semantics with independent base facts, as is prevalent in most probabilistic database approaches, but also supports semantically more expressive, probabilistic first-order representations such as Markov Logic Networks. As knowledge extraction on the Web often is an iterative (and inherently noisy) process, URDF explicitly targets the resolution of inconsistencies between the underlying RDF base facts and the inference rules. The core of our approach is a novel and efficient approximation algorithm for a generalized version of the Weighted MAX-SAT problem, allowing us to dynamically resolve such inconsistencies directly at query processing time. Our MAX-SAT algorithm has a worst-case running time of $O(|C| \cdot |S|)$, where $|C|$ and $|S|$ denote the number of facts in grounded soft and hard rules, respectively, and it comes with tight approximation guarantees with respect to the shape of the rules and the distribution of confidences of the facts they contain. Experiments over various benchmark settings confirm the high robustness and significantly improved runtime of our reasoning framework in comparison to state-of-the-art techniques for MCMC sampling such as MAP inference and MC-SAT.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Theobald, Martin
%A Sozio, Mauro
%A Suchanek, Fabian
%A Nakashole, Ndapandula
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T URDF: Efficient Reasoning in Uncertain RDF Knowledge Bases with Soft and Hard Rules :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1556-3
%F EDOC: 536366
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-002
%F OTHER: Local-ID: C1256DBF005F876D-4F6C2407136ECAA6C125770E003634BE-urdf-tr-2010
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2010
%P 48 p.
%X We present URDF, an efficient reasoning framework for graph-based,
nonschematic RDF knowledge bases and SPARQL-like queries. URDF augments
first-order reasoning by a combination of soft rules, with Datalog-style
recursive implications, and hard rules, in the shape of mutually exclusive
sets of facts. It incorporates the common possible worlds semantics with
independent base facts, as is prevalent in most probabilistic database
approaches, but also supports semantically more expressive, probabilistic
first-order representations such as Markov Logic Networks.
As knowledge extraction on the Web often is an iterative (and inherently
noisy) process, URDF explicitly targets the resolution of inconsistencies
between the underlying RDF base facts and the inference rules. The core of
our approach is a novel and efficient approximation algorithm for a
generalized version of the Weighted MAX-SAT problem, allowing us to
dynamically resolve such inconsistencies directly at query processing time.
Our MAX-SAT algorithm has a worst-case running time of $O(|C| \cdot |S|)$,
where $|C|$ and $|S|$ denote the number of facts in grounded soft and hard
rules, respectively, and it comes with tight approximation guarantees with
respect to the shape of the rules and the distribution of confidences of
the facts they contain. Experiments over various benchmark settings confirm
the high robustness and significantly improved runtime of our reasoning
framework in comparison to state-of-the-art techniques for MCMC sampling
such as MAP inference and MC-SAT.
%B Research Report
2009
Scalable Phrase Mining for Ad-hoc Text Analytics
S. Bedathur, K. Berberich, J. Dittrich, N. Mamoulis and G. Weikum
Technical Report, 2009
S. Bedathur, K. Berberich, J. Dittrich, N. Mamoulis and G. Weikum
Technical Report, 2009
Export
BibTeX
@techreport{BedathurBerberichDittrichMamoulisWeikum2009,
TITLE = {Scalable Phrase Mining for Ad-hoc Text Analytics},
AUTHOR = {Bedathur, Srikanta and Berberich, Klaus and Dittrich, Jens and Mamoulis, Nikos and Weikum, Gerhard},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2009-5-006},
LOCALID = {Local-ID: C1256DBF005F876D-4E35301DBC58B9F7C12575A00044A942-TechReport-BBDMW2009},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2009},
DATE = {2009},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Bedathur, Srikanta
%A Berberich, Klaus
%A Dittrich, Jens
%A Mamoulis, Nikos
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Scalable Phrase Mining for Ad-hoc Text Analytics :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-194A-0
%F EDOC: 520425
%@ 0946-011X
%F OTHER: Local-ID: C1256DBF005F876D-4E35301DBC58B9F7C12575A00044A942-TechReport-BBDMW2009
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2009
%P 41 p.
%B Research Report
Generalized intrinsic symmetry detection
A. Berner, M. Bokeloh, M. Wand, A. Schilling and H.-P. Seidel
Technical Report, 2009
A. Berner, M. Bokeloh, M. Wand, A. Schilling and H.-P. Seidel
Technical Report, 2009
Abstract
In this paper, we address the problem of detecting partial symmetries in
3D objects. In contrast to previous work, our algorithm is able to match
deformed symmetric parts: We first develop an algorithm for the case of
approximately isometric deformations, based on matching graphs of
surface feature lines that are annotated with intrinsic geometric
properties. The sensitivity to non-isometry is controlled by tolerance
parameters for each such annotation. Using large tolerance values for
some of these annotations and a robust matching of the graph topology
yields a more general symmetry detection algorithm that can detect
similarities in structures that have undergone strong deformations. This
approach for the first time allows for detecting partial intrinsic as
well as more general, non-isometric symmetries. We evaluate the
recognition performance of our technique for a number of synthetic and
real-world scanner data sets.
Export
BibTeX
@techreport{BernerBokelohWandSchillingSeidel2009,
TITLE = {Generalized intrinsic symmetry detection},
AUTHOR = {Berner, Alexander and Bokeloh, Martin and Wand, Michael and Schilling, Andreas and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-4-005},
NUMBER = {MPI-I-2009-4-005},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2009},
DATE = {2009},
ABSTRACT = {In this paper, we address the problem of detecting partial symmetries in 3D objects. In contrast to previous work, our algorithm is able to match deformed symmetric parts: We first develop an algorithm for the case of approximately isometric deformations, based on matching graphs of surface feature lines that are annotated with intrinsic geometric properties. The sensitivity to non-isometry is controlled by tolerance parameters for each such annotation. Using large tolerance values for some of these annotations and a robust matching of the graph topology yields a more general symmetry detection algorithm that can detect similarities in structures that have undergone strong deformations. This approach for the first time allows for detecting partial intrinsic as well as more general, non-isometric symmetries. We evaluate the recognition performance of our technique for a number of synthetic and real-world scanner data sets.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Berner, Alexander
%A Bokeloh, Martin
%A Wand, Michael
%A Schilling, Andreas
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
%T Generalized intrinsic symmetry detection :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-666B-3
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-4-005
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2009
%P 33 p.
%X In this paper, we address the problem of detecting partial symmetries in
3D objects. In contrast to previous work, our algorithm is able to match
deformed symmetric parts: We first develop an algorithm for the case of
approximately isometric deformations, based on matching graphs of
surface feature lines that are annotated with intrinsic geometric
properties. The sensitivity to non-isometry is controlled by tolerance
parameters for each such annotation. Using large tolerance values for
some of these annotations and a robust matching of the graph topology
yields a more general symmetry detection algorithm that can detect
similarities in structures that have undergone strong deformations. This
approach for the first time allows for detecting partial intrinsic as
well as more general, non-isometric symmetries. We evaluate the
recognition performance of our technique for a number of synthetic and
real-world scanner data sets.
%B Research Report / Max-Planck-Institut für Informatik
Towards a Universal Wordnet by Learning from Combined Evidence
G. de Melo and G. Weikum
Technical Report, 2009
G. de Melo and G. Weikum
Technical Report, 2009
Abstract
Lexical databases are invaluable sources of knowledge about words and
their meanings,
with numerous applications in areas like NLP, IR, and AI.
We propose a methodology for the automatic construction of a large-scale
multilingual
lexical database where words of many languages are hierarchically
organized in terms of their
meanings and their semantic relations to other words. This resource is
bootstrapped from
WordNet, a well-known English-language resource. Our approach extends
WordNet with around
1.5 million meaning links for 800,000 words in over 200 languages,
drawing on evidence extracted
from a variety of resources including existing (monolingual) wordnets,
(mostly bilingual) translation
dictionaries, and parallel corpora.
Graph-based scoring functions and statistical learning techniques are
used to iteratively integrate
this information and build an output graph. Experiments show that this
wordnet has a high
level of precision and coverage, and that it can be useful in applied
tasks such as
cross-lingual text classification.
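To illustrate the flavor of graph-based score propagation over such a
resource, here is a hypothetical Python sketch; the damped score-averaging
scheme and the data layout are our assumptions, not the paper's actual
scoring functions or learning techniques.

def propagate(evidence, neighbors, rounds=10, damping=0.85):
    """evidence: dict edge -> initial evidence score in [0, 1];
    neighbors: dict edge -> list of related edges (e.g. candidate meaning
    links sharing a translation). Iteratively blends each edge's own
    evidence with the average score of its related edges."""
    score = dict(evidence)
    for _ in range(rounds):
        score = {e: (1 - damping) * evidence[e]
                    + damping * (sum(score[n] for n in neighbors[e])
                                 / max(len(neighbors[e]), 1))
                 for e in evidence}
    return score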
Export
BibTeX
@techreport{deMeloWeikum2009,
TITLE = {Towards a Universal Wordnet by Learning from Combined Evidence},
AUTHOR = {de Melo, Gerard and Weikum, Gerhard},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-5-005},
NUMBER = {MPI-I-2009-5-005},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2009},
DATE = {2009},
ABSTRACT = {Lexical databases are invaluable sources of knowledge about words and their meanings, with numerous applications in areas like NLP, IR, and AI. We propose a methodology for the automatic construction of a large-scale multilingual lexical database where words of many languages are hierarchically organized in terms of their meanings and their semantic relations to other words. This resource is bootstrapped from WordNet, a well-known English-language resource. Our approach extends WordNet with around 1.5 million meaning links for 800,000 words in over 200 languages, drawing on evidence extracted from a variety of resources including existing (monolingual) wordnets, (mostly bilingual) translation dictionaries, and parallel corpora. Graph-based scoring functions and statistical learning techniques are used to iteratively integrate this information and build an output graph. Experiments show that this wordnet has a high level of precision and coverage, and that it can be useful in applied tasks such as cross-lingual text classification.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A de Melo, Gerard
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Towards a Universal Wordnet by Learning from Combined Evidence :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-665C-5
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-5-005
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2009
%P 32 p.
%X Lexical databases are invaluable sources of knowledge about words and
their meanings,
with numerous applications in areas like NLP, IR, and AI.
We propose a methodology for the automatic construction of a large-scale
multilingual
lexical database where words of many languages are hierarchically
organized in terms of their
meanings and their semantic relations to other words. This resource is
bootstrapped from
WordNet, a well-known English-language resource. Our approach extends
WordNet with around
1.5 million meaning links for 800,000 words in over 200 languages,
drawing on evidence extracted
from a variety of resources including existing (monolingual) wordnets,
(mostly bilingual) translation
dictionaries, and parallel corpora.
Graph-based scoring functions and statistical learning techniques are
used to iteratively integrate
this information and build an output graph. Experiments show that this
wordnet has a high
level of precision and coverage, and that it can be useful in applied
tasks such as
cross-lingual text classification.
%B Research Report
A shaped temporal filter camera
M. Fuchs, T. Chen, O. Wang, R. Raskar, H. P. A. Lensch and H.-P. Seidel
Technical Report, 2009
M. Fuchs, T. Chen, O. Wang, R. Raskar, H. P. A. Lensch and H.-P. Seidel
Technical Report, 2009
Export
BibTeX
@techreport{FuchsChenWangRaskarLenschSeidel2009,
TITLE = {A shaped temporal filter camera},
AUTHOR = {Fuchs, Martin and Chen, Tongbo and Wang, Oliver and Raskar, Ramesh and Lensch, Hendrik P. A. and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-4-003},
NUMBER = {MPI-I-2009-4-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2009},
DATE = {2009},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Fuchs, Martin
%A Chen, Tongbo
%A Wang, Oliver
%A Raskar, Ramesh
%A Lensch, Hendrik P. A.
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T A shaped temporal filter camera :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-666E-E
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-4-003
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2009
%P 25 p.
%B Research Report / Max-Planck-Institut für Informatik
MPI Informatics building model as data for your research
V. Havran, J. Zajac, J. Drahokoupil and H.-P. Seidel
Technical Report, 2009
V. Havran, J. Zajac, J. Drahokoupil and H.-P. Seidel
Technical Report, 2009
Abstract
In this report we describe the MPI Informatics building
model, which provides the data of the Max-Planck-Institut
für Informatik (MPII) building. We present our
motivation for this work and its relationship to the
reproducibility of scientific research. We describe the
dataset acquisition and creation, including the geometry,
luminaires, surface reflectances, reference photographs, etc.
needed to use this model in the testing of algorithms. The
created dataset can be used in computer graphics and beyond,
in particular in global illumination algorithms with a focus
on realistic and predictive image synthesis. Outside of
computer graphics, it can be used as a general source of
real-world geometry with an existing counterpart, and is
hence also suitable for computer vision.
Export
BibTeX
@techreport{HavranZajacDrahokoupilSeidel2009,
TITLE = {{MPI} Informatics building model as data for your research},
AUTHOR = {Havran, Vlastimil and Zajac, Jozef and Drahokoupil, Jiri and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-4-004},
NUMBER = {MPI-I-2009-4-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2009},
DATE = {2009},
ABSTRACT = {In this report we describe the MPI Informatics building model, which provides the data of the Max-Planck-Institut f\"{u}r Informatik (MPII) building. We present our motivation for this work and its relationship to the reproducibility of scientific research. We describe the dataset acquisition and creation, including the geometry, luminaires, surface reflectances, reference photographs, etc. needed to use this model in the testing of algorithms. The created dataset can be used in computer graphics and beyond, in particular in global illumination algorithms with a focus on realistic and predictive image synthesis. Outside of computer graphics, it can be used as a general source of real-world geometry with an existing counterpart, and is hence also suitable for computer vision.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Havran, Vlastimil
%A Zajac, Jozef
%A Drahokoupil, Jiri
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
%T MPI Informatics building model as data for your research :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6665-F
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-4-004
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2009
%P 113 p.
%X In this report we describe the MPI Informatics building
model, which provides the data of the Max-Planck-Institut
für Informatik (MPII) building. We present our
motivation for this work and its relationship to the
reproducibility of scientific research. We describe the
dataset acquisition and creation, including the geometry,
luminaires, surface reflectances, reference photographs, etc.
needed to use this model in the testing of algorithms. The
created dataset can be used in computer graphics and beyond,
in particular in global illumination algorithms with a focus
on realistic and predictive image synthesis. Outside of
computer graphics, it can be used as a general source of
real-world geometry with an existing counterpart, and is
hence also suitable for computer vision.
%B Research Report / Max-Planck-Institut für Informatik
Deciding the Inductive Validity of Forall Exists* Queries
M. Horbach and C. Weidenbach
Technical Report, 2009a
M. Horbach and C. Weidenbach
Technical Report, 2009a
Abstract
We present a new saturation-based decidability result for inductive validity.
Let $\Sigma$ be a finite signature in which all function symbols are at most
unary and let $N$ be a satisfiable Horn clause set without equality in which
all positive literals are linear.
If $N\cup\{A_1,\ldots,A_n\rightarrow\}$ belongs to a finitely saturating clause
class, then it is decidable whether a sentence of the form $\forall\exists^*
(A_1\wedge\ldots\wedge A_n)$ is valid in the minimal model of $N$.
Export
BibTeX
@techreport{HorbachWeidenbach2009,
TITLE = {Deciding the Inductive Validity of Forall Exists* Queries},
AUTHOR = {Horbach, Matthias and Weidenbach, Christoph},
LANGUAGE = {eng},
NUMBER = {MPI-I-2009-RG1-001},
LOCALID = {Local-ID: C125716C0050FB51-F9BA0666A42B8463C12576AF002882D7-Horbach2009TR1},
YEAR = {2009},
DATE = {2009},
ABSTRACT = {We present a new saturation-based decidability result for inductive validity. Let $\Sigma$ be a finite signature in which all function symbols are at most unary and let $N$ be a satisfiable Horn clause set without equality in which all positive literals are linear. If $N\cup\{A_1,\ldots,A_n\rightarrow\}$ belongs to a finitely saturating clause class, then it is decidable whether a sentence of the form $\forall\exists^* (A_1\wedge\ldots\wedge A_n)$ is valid in the minimal model of $N$.},
}
Endnote
%0 Report
%A Horbach, Matthias
%A Weidenbach, Christoph
%+ Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
%T Deciding the Inductive Validity of Forall Exists* Queries :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1A51-3
%F EDOC: 521099
%F OTHER: Local-ID: C125716C0050FB51-F9BA0666A42B8463C12576AF002882D7-Horbach2009TR1
%D 2009
%X We present a new saturation-based decidability result for inductive validity.
Let $\Sigma$ be a finite signature in which all function symbols are at most
unary and let $N$ be a satisfiable Horn clause set without equality in which
all positive literals are linear.
If $N\cup\{A_1,\ldots,A_n\rightarrow\}$ belongs to a finitely saturating clause
class, then it is decidable whether a sentence of the form $\forall\exists^*
(A_1\wedge\ldots\wedge A_n)$ is valid in the minimal model of $N$.
Superposition for Fixed Domains
M. Horbach and C. Weidenbach
Technical Report, 2009b
M. Horbach and C. Weidenbach
Technical Report, 2009b
Abstract
Superposition is an established decision procedure for a variety of first-order
logic theories represented by sets of clauses. A satisfiable theory, saturated
by superposition, implicitly defines a minimal term-generated model for the
theory.
Proving universal properties with respect to a saturated theory directly leads
to a modification of the minimal model's term-generated domain, as new Skolem
functions are introduced. For many applications, this is not desired.
Therefore, we propose the first superposition calculus that can explicitly
represent existentially quantified variables and can thus compute with respect
to a given domain. This calculus is sound and refutationally complete in the
limit for a first-order fixed domain semantics.
For saturated Horn theories and classes of positive formulas, we can even
employ the calculus to prove properties of the minimal model itself, going
beyond the scope of known superposition-based approaches.
Export
BibTeX
@techreport{Horbach2009TR2,
TITLE = {Superposition for Fixed Domains},
AUTHOR = {Horbach, Matthias and Weidenbach, Christoph},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2009-RG1-005},
LOCALID = {Local-ID: C125716C0050FB51-5DDBBB1B134360CFC12576AF0028D299-Horbach2009TR2},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2009},
DATE = {2009},
ABSTRACT = {Superposition is an established decision procedure for a variety of first-order logic theories represented by sets of clauses. A satisfiable theory, saturated by superposition, implicitly defines a minimal term-generated model for the theory. Proving universal properties with respect to a saturated theory directly leads to a modification of the minimal model's term-generated domain, as new Skolem functions are introduced. For many applications, this is not desired. Therefore, we propose the first superposition calculus that can explicitly represent existentially quantified variables and can thus compute with respect to a given domain. This calculus is sound and refutationally complete in the limit for a first-order fixed domain semantics. For saturated Horn theories and classes of positive formulas, we can even employ the calculus to prove properties of the minimal model itself, going beyond the scope of known superposition-based approaches.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Horbach, Matthias
%A Weidenbach, Christoph
%+ Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
%T Superposition for Fixed Domains :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1A71-C
%F EDOC: 521100
%F OTHER: Local-ID: C125716C0050FB51-5DDBBB1B134360CFC12576AF0028D299-Horbach2009TR2
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2009
%P 49 p.
%X Superposition is an established decision procedure for a variety of first-order
logic theories represented by sets of clauses. A satisfiable theory, saturated
by superposition, implicitly defines a minimal term-generated model for the
theory.
Proving universal properties with respect to a saturated theory directly leads
to a modification of the minimal model's term-generated domain, as new Skolem
functions are introduced. For many applications, this is not desired.
Therefore, we propose the first superposition calculus that can explicitly
represent existentially quantified variables and can thus compute with respect
to a given domain. This calculus is sound and refutationally complete in the
limit for a first-order fixed domain semantics.
For saturated Horn theories and classes of positive formulas, we can even
employ the calculus to prove properties of the minimal model itself, going
beyond the scope of known superposition-based approaches.
%B Research Report
%@ false
Decidability Results for Saturation-based Model Building
M. Horbach and C. Weidenbach
Technical Report, 2009c
M. Horbach and C. Weidenbach
Technical Report, 2009c
Abstract
Saturation-based calculi such as superposition can be
successfully instantiated to decision procedures for many decidable
fragments of first-order logic. In case of termination without
generating an empty clause, a saturated clause set implicitly represents
a minimal model for all clauses, based on the underlying term ordering
of the superposition calculus. In general, it is not decidable whether a
ground atom, a clause or even a formula holds in this minimal model of a
satisfiable saturated clause set.
Based on an extension of our superposition calculus for fixed domains
with syntactic disequality constraints in a non-equational setting, we
describe models given by ARM (Atomic Representations of term Models) or
DIG (Disjunctions of Implicit Generalizations) representations as
minimal models of finite saturated clause sets. This allows us to
present several new decidability results for validity in such models.
These results extend in particular the known decidability results for
ARM and DIG representations.
Export
BibTeX
@techreport{HorbachWeidenbach2010,
TITLE = {Decidability Results for Saturation-based Model Building},
AUTHOR = {Horbach, Matthias and Weidenbach, Christoph},
LANGUAGE = {eng},
ISSN = {0946-011X},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-RG1-004},
NUMBER = {MPI-I-2009-RG1-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2009},
DATE = {2009},
ABSTRACT = {Saturation-based calculi such as superposition can be successfully instantiated to decision procedures for many decidable fragments of first-order logic. In case of termination without generating an empty clause, a saturated clause set implicitly represents a minimal model for all clauses, based on the underlying term ordering of the superposition calculus. In general, it is not decidable whether a ground atom, a clause or even a formula holds in this minimal model of a satisfiable saturated clause set. Based on an extension of our superposition calculus for fixed domains with syntactic disequality constraints in a non-equational setting, we describe models given by ARM (Atomic Representations of term Models) or DIG (Disjunctions of Implicit Generalizations) representations as minimal models of finite saturated clause sets. This allows us to present several new decidability results for validity in such models. These results extend in particular the known decidability results for ARM and DIG representations.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Horbach, Matthias
%A Weidenbach, Christoph
%+ Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
%T Decidability Results for Saturation-based Model Building :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6659-B
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-RG1-004
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2009
%P 38 p.
%X Saturation-based calculi such as superposition can be
successfully instantiated to decision procedures for many decidable
fragments of first-order logic. In case of termination without
generating an empty clause, a saturated clause set implicitly represents
a minimal model for all clauses, based on the underlying term ordering
of the superposition calculus. In general, it is not decidable whether a
ground atom, a clause or even a formula holds in this minimal model of a
satisfiable saturated clause set.
Based on an extension of our superposition calculus for fixed domains
with syntactic disequality constraints in a non-equational setting, we
describe models given by ARM (Atomic Representations of term Models) or
DIG (Disjunctions of Implicit Generalizations) representations as
minimal models of finite saturated clause sets. This allows us to
present several new decidability results for validity in such models.
These results extend in particular the known decidability results for
ARM and DIG representations.
%B Research Report
%@ false
Acquisition and analysis of bispectral bidirectional reflectance distribution functions
M. B. Hullin, B. Ajdin, J. Hanika, H.-P. Seidel, J. Kautz and H. P. A. Lensch
Technical Report, 2009
M. B. Hullin, B. Ajdin, J. Hanika, H.-P. Seidel, J. Kautz and H. P. A. Lensch
Technical Report, 2009
Abstract
In fluorescent materials, energy from a certain band of incident wavelengths is
reflected or reradiated at larger wavelengths, i.e. with lower energy per
photon. While fluorescent materials are common in everyday life, they have
received little attention in computer graphics. Especially, no bidirectional
reflectance measurements of fluorescent materials have been available so far. In
this paper, we develop the concept of a bispectral BRDF, which extends the
well-known concept of the bidirectional reflectance distribution function (BRDF)
to account for energy transfer between wavelengths. Using a bidirectional and
bispectral measurement setup, we acquire reflectance data of a variety of
fluorescent materials, including vehicle paints, paper and fabric. We show
bispectral renderings of the measured data and compare them with reduced
versions of the bispectral BRDF, including the traditional RGB vector valued
BRDF. Principal component analysis of the measured data reveals that for some
materials the fluorescent reradiation spectrum changes considerably over the
range of directions. We further show that bispectral BRDFs can be efficiently
acquired using an acquisition strategy based on principal components.
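One standard way to write the reflected radiance under such a bispectral
BRDF $f$ is (our notation; details may differ from the report)
$$L_o(\omega_o, \lambda_o) = \int_{\Lambda} \int_{\Omega} f(\omega_i, \omega_o, \lambda_i, \lambda_o)\, L_i(\omega_i, \lambda_i) \cos\theta_i \, \mathrm{d}\omega_i \, \mathrm{d}\lambda_i,$$
which integrates the incident light over both directions $\Omega$ and
wavelengths $\Lambda$; the ordinary BRDF is recovered as the special case
where $f$ vanishes whenever $\lambda_i \neq \lambda_o$.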
Export
BibTeX
@techreport{HullinAjdinHanikaSeidelKautzLensch2009,
TITLE = {Acquisition and analysis of bispectral bidirectional reflectance distribution functions},
AUTHOR = {Hullin, Matthias B. and Ajdin, Boris and Hanika, Johannes and Seidel, Hans-Peter and Kautz, Jan and Lensch, Hendrik P. A.},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-4-001},
NUMBER = {MPI-I-2009-4-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2009},
DATE = {2009},
ABSTRACT = {In fluorescent materials, energy from a certain band of incident wavelengths is reflected or reradiated at larger wavelengths, i.e. with lower energy per photon. While fluorescent materials are common in everyday life, they have received little attention in computer graphics. Especially, no bidirectional reflectance measurements of fluorescent materials have been available so far. In this paper, we develop the concept of a bispectral BRDF, which extends the well-known concept of the bidirectional reflectance distribution function (BRDF) to account for energy transfer between wavelengths. Using a bidirectional and bispectral measurement setup, we acquire reflectance data of a variety of fluorescent materials, including vehicle paints, paper and fabric. We show bispectral renderings of the measured data and compare them with reduced versions of the bispectral BRDF, including the traditional RGB vector valued BRDF. Principal component analysis of the measured data reveals that for some materials the fluorescent reradiation spectrum changes considerably over the range of directions. We further show that bispectral BRDFs can be efficiently acquired using an acquisition strategy based on principal components.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Hullin, Matthias B.
%A Ajdin, Boris
%A Hanika, Johannes
%A Seidel, Hans-Peter
%A Kautz, Jan
%A Lensch, Hendrik P. A.
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Acquisition and analysis of bispectral bidirectional reflectance distribution functions :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6671-4
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-4-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2009
%P 25 p.
%X In fluorescent materials, energy from a certain band of incident wavelengths is
reflected or reradiated at larger wavelengths, i.e. with lower energy per
photon. While fluorescent materials are common in everyday life, they have
received little attention in computer graphics. Especially, no bidirectional
reflectance measurements of fluorescent materials have been available so far. In
this paper, we develop the concept of a bispectral BRDF, which extends the
well-known concept of the bidirectional reflectance distribution function (BRDF)
to account for energy transfer between wavelengths. Using a bidirectional and
bispectral measurement setup, we acquire reflectance data of a variety of
fluorescent materials, including vehicle paints, paper and fabric. We show
bispectral renderings of the measured data and compare them with reduced
versions of the bispectral BRDF, including the traditional RGB vector valued
BRDF. Principal component analysis of the measured data reveals that for some
materials the fluorescent reradiation spectrum changes considerably over the
range of directions. We further show that bispectral BRDFs can be efficiently
acquired using an acquisition strategy based on principal components.
%B Research Report / Max-Planck-Institut für Informatik
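To make the bispectral extension concrete: where a conventional spectral
BRDF relates incident and outgoing light at a single wavelength, the
bispectral BRDF described above couples all pairs of incident and outgoing
wavelengths, so the reflected radiance becomes a double integral over
directions and incident wavelengths (notation ours, a sketch following the
abstract's definitions):

\[
L_o(\omega_o, \lambda_o) = \int_{\Omega} \int_{\Lambda}
f(\omega_i, \lambda_i, \omega_o, \lambda_o)\,
L_i(\omega_i, \lambda_i)\, \cos\theta_i \;\mathrm{d}\lambda_i\,\mathrm{d}\omega_i .
\]

The traditional RGB vector-valued BRDF mentioned in the abstract corresponds
to keeping only the diagonal \(\lambda_i = \lambda_o\), which is exactly the
information that fluorescent reradiation escapes.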
MING: Mining Informative Entity-relationship Subgraphs
G. Kasneci, S. Elbassuoni and G. Weikum
Technical Report, 2009
G. Kasneci, S. Elbassuoni and G. Weikum
Technical Report, 2009
Export
BibTeX
@techreport{KasneciWeikumElbassuoni2009,
TITLE = {{MING}: Mining Informative Entity-relationship Subgraphs},
AUTHOR = {Kasneci, Gjergji and Elbassuoni, Shady and Weikum, Gerhard},
LANGUAGE = {eng},
NUMBER = {MPI-I-2009-5-007},
LOCALID = {Local-ID: C1256DBF005F876D-E977DDB8EDAABEE6C12576320036DBD9-KasneciMING2009},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2009},
DATE = {2009},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Kasneci, Gjergji
%A Elbassuoni, Shady
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T MING: Mining Informative Entity-relationship Subgraphs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1932-4
%F EDOC: 520416
%F OTHER: Local-ID: C1256DBF005F876D-E977DDB8EDAABEE6C12576320036DBD9-KasneciMING2009
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2009
%P 32 p.
%B Research Report
The RDF-3X Engine for Scalable Management of RDF Data
T. Neumann and G. Weikum
Technical Report, 2009
T. Neumann and G. Weikum
Technical Report, 2009
Abstract
RDF is a data model for schema-free structured information that is gaining
momentum in the context of Semantic-Web data, life sciences, and also Web 2.0
platforms. The "pay-as-you-go" nature of RDF and the flexible
pattern-matching capabilities of its query language SPARQL entail efficiency
and scalability challenges for complex queries including long join paths. This
paper presents the RDF-3X engine, an implementation of SPARQL that achieves
excellent performance by pursuing a RISC-style architecture with streamlined
indexing and query processing.
The physical design is identical for all RDF-3X databases regardless of their
workloads, and completely eliminates the need for index tuning by exhaustive
indexes for all permutations of subject-property-object triples and their
binary and unary projections. These indexes are highly compressed, and the
query processor can aggressively leverage fast merge joins with excellent
performance of processor caches. The query optimizer is able to choose optimal
join orders even for complex queries, with a cost model that includes
statistical synopses for entire join paths. Although RDF-3X is optimized for
queries, it also provides good support for efficient online updates by means of
a staging architecture: direct updates to the main database indexes are
deferred, and instead applied to compact differential indexes which are later
merged into the main indexes in a batched manner.
Experimental studies with several large-scale datasets with more than 50
million RDF triples and benchmark queries that include pattern matching,
many-way star-joins, and long path-joins demonstrate that RDF-3X can outperform
the previously best alternatives by one or two orders of magnitude.
Export
BibTeX
@techreport{Neumann2009report1,
TITLE = {The {RDF}-3X Engine for Scalable Management of {RDF} Data},
AUTHOR = {Neumann, Thomas and Weikum, Gerhard},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2009-5-003},
LOCALID = {Local-ID: C1256DBF005F876D-AD3DBAFA6FB90DD2C1257593002FF3DF-Neumann2009report1},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2009},
DATE = {2009},
ABSTRACT = {RDF is a data model for schema-free structured information that is gaining momentum in the context of Semantic-Web data, life sciences, and also Web 2.0 platforms. The ``pay-as-you-go'' nature of RDF and the flexible pattern-matching capabilities of its query language SPARQL entail efficiency and scalability challenges for complex queries including long join paths. This paper presents the RDF-3X engine, an implementation of SPARQL that achieves excellent performance by pursuing a RISC-style architecture with streamlined indexing and query processing. The physical design is identical for all RDF-3X databases regardless of their workloads, and completely eliminates the need for index tuning by exhaustive indexes for all permutations of subject-property-object triples and their binary and unary projections. These indexes are highly compressed, and the query processor can aggressively leverage fast merge joins with excellent performance of processor caches. The query optimizer is able to choose optimal join orders even for complex queries, with a cost model that includes statistical synopses for entire join paths. Although RDF-3X is optimized for queries, it also provides good support for efficient online updates by means of a staging architecture: direct updates to the main database indexes are deferred, and instead applied to compact differential indexes which are later merged into the main indexes in a batched manner. Experimental studies with several large-scale datasets with more than 50 million RDF triples and benchmark queries that include pattern matching, many-way star-joins, and long path-joins demonstrate that RDF-3X can outperform the previously best alternatives by one or two orders of magnitude.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Neumann, Thomas
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T The RDF-3X Engine for Scalable Management of RDF Data :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-195A-A
%F EDOC: 520381
%@ 0946-011X
%F OTHER: Local-ID: C1256DBF005F876D-AD3DBAFA6FB90DD2C1257593002FF3DF-Neumann2009report1
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2009
%X RDF is a data model for schema-free structured information that is gaining
momentum in the context of Semantic-Web data, life sciences, and also Web 2.0
platforms. The "pay-as-you-go" nature of RDF and the flexible
pattern-matching capabilities of its query language SPARQL entail efficiency
and scalability challenges for complex queries including long join paths. This
paper presents the RDF-3X engine, an implementation of SPARQL that achieves
excellent performance by pursuing a RISC-style architecture with streamlined
indexing and query processing.
The physical design is identical for all RDF-3X databases regardless of their
workloads, and completely eliminates the need for index tuning by exhaustive
indexes for all permutations of subject-property-object triples and their
binary and unary projections. These indexes are highly compressed, and the
query processor can aggressively leverage fast merge joins with excellent
performance of processor caches. The query optimizer is able to choose optimal
join orders even for complex queries, with a cost model that includes
statistical synopses for entire join paths. Although RDF-3X is optimized for
queries, it also provides good support for efficient online updates by means of
a staging architecture: direct updates to the main database indexes are
deferred, and instead applied to compact differential indexes which are later
merged into the main indexes in a batched manner.
Experimental studies with several large-scale datasets with more than 50
million RDF triples and benchmark queries that include pattern matching,
many-way star-joins, and long path-joins demonstrate that RDF-3X can outperform
the previously best alternatives by one or two orders of magnitude.
%B Research Report
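The indexing scheme from the abstract is easy to picture in miniature: keep
every triple under all six attribute orderings, so that any triple pattern
with bound leading components becomes a sorted range scan. A toy sketch (our
own illustration, not RDF-3X code; compression and the aggregated
binary/unary indexes are omitted):

from bisect import bisect_left
from itertools import permutations

class TripleStore:
    # every ordering of (subject, predicate, object) is a sorted list,
    # so a pattern with a bound prefix is answered by a range scan
    ORDERS = [''.join(p) for p in permutations('spo')]  # spo, sop, pso, pos, osp, ops

    def __init__(self, triples):
        self.index = {
            order: sorted(tuple(t['spo'.index(c)] for c in order) for t in triples)
            for order in self.ORDERS
        }

    def scan(self, order, prefix):
        # all keys in the chosen ordering that start with `prefix`
        idx = self.index[order]
        i, out = bisect_left(idx, prefix), []
        while i < len(idx) and idx[i][:len(prefix)] == prefix:
            out.append(idx[i]); i += 1
        return out

store = TripleStore([('rdf3x', 'writtenBy', 'neumann'),
                     ('rdf3x', 'writtenBy', 'weikum'),
                     ('yago',  'writtenBy', 'suchanek')])
# pattern (?s, writtenBy, ?o): predicate bound, so use the 'pso' ordering
print(store.scan('pso', ('writtenBy',)))

Because every index is sorted, two such scans that share a join column
arrive in join-column order and can be combined with a merge join, which is
the access pattern the report's RISC-style query processor is built around.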
Coupling Knowledge Bases and Web Services for Active Knowledge
N. Preda, F. Suchanek, G. Kasneci, T. Neumann and G. Weikum
Technical Report, 2009
N. Preda, F. Suchanek, G. Kasneci, T. Neumann and G. Weikum
Technical Report, 2009
Abstract
We present ANGIE, a system that can answer user queries by combining
knowledge from a local database with knowledge retrieved from Web services.
If a user poses a query that cannot be answered by the local database alone,
ANGIE calls the appropriate Web services to retrieve the missing information.
In ANGIE, Web services act as dynamic components of the knowledge base that
deliver knowledge on demand. To the user, this is fully transparent; the
dynamically acquired knowledge is presented as if it were stored in the
local knowledge base.
We have developed an RDF-based model for the declarative definition of
functions embedded in the local knowledge base. The results of available
Web services are cast into RDF subgraphs. Parameter bindings are
automatically constructed by ANGIE, services are invoked, and the
semi-structured information returned by the services is dynamically
integrated into the knowledge base.
We have developed a query rewriting algorithm that determines one or more
function compositions that need to be executed in order to evaluate a
SPARQL-style user query. The key idea is that the local knowledge base can
be used to guide the selection of values used as input parameters of
function calls. This is in contrast to the conventional approaches in the
literature, which would exhaustively materialize all values that can be
used as binding values for the input parameters.
Export
BibTeX
@techreport{PredaSuchanekKasneciNeumannWeikum2009,
TITLE = {Coupling Knowledge Bases and Web Services for Active Knowledge},
AUTHOR = {Preda, Nicoleta and Suchanek, Fabian and Kasneci, Gjergji and Neumann, Thomas and Weikum, Gerhard},
LANGUAGE = {eng},
NUMBER = {MPI-I-2009-5-004},
LOCALID = {Local-ID: C1256DBF005F876D-BF2AB4A39F925BC8C125759800444744-PredaSuchanekKasneciNeumannWeikum2009},
YEAR = {2009},
DATE = {2009},
ABSTRACT = {We present ANGIE, a system that can answer user queries by combining knowledge from a local database with knowledge retrieved from Web services. If a user poses a query that cannot be answered by the local database alone, ANGIE calls the appropriate Web services to retrieve the missing information. In ANGIE, Web services act as dynamic components of the knowledge base that deliver knowledge on demand. To the user, this is fully transparent; the dynamically acquired knowledge is presented as if it were stored in the local knowledge base. We have developed an RDF-based model for the declarative definition of functions embedded in the local knowledge base. The results of available Web services are cast into RDF subgraphs. Parameter bindings are automatically constructed by ANGIE, services are invoked, and the semi-structured information returned by the services is dynamically integrated into the knowledge base. We have developed a query rewriting algorithm that determines one or more function compositions that need to be executed in order to evaluate a SPARQL-style user query. The key idea is that the local knowledge base can be used to guide the selection of values used as input parameters of function calls. This is in contrast to the conventional approaches in the literature, which would exhaustively materialize all values that can be used as binding values for the input parameters.},
TYPE = {Research Reports},
}
Endnote
%0 Report
%A Preda, Nicoleta
%A Suchanek, Fabian
%A Kasneci, Gjergji
%A Neumann, Thomas
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Coupling Knowledge Bases and Web Services for Active Knowledge :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1901-1
%F EDOC: 520423
%F OTHER: Local-ID: C1256DBF005F876D-BF2AB4A39F925BC8C125759800444744-PredaSuchanekKasneciNeumannWeikum2009
%D 2009
%X We present ANGIE, a system that can answer user queries by combining
knowledge from a local database with knowledge retrieved from Web services.
If a user poses a query that cannot be answered by the local database alone,
ANGIE calls the appropriate Web services to retrieve the missing information.
In ANGIE, Web services act as dynamic components of the knowledge base that
deliver knowledge on demand. To the user, this is fully transparent; the
dynamically acquired knowledge is presented as if it were stored in the
local knowledge base.
We have developed an RDF-based model for the declarative definition of
functions embedded in the local knowledge base. The results of available
Web services are cast into RDF subgraphs. Parameter bindings are
automatically constructed by ANGIE, services are invoked, and the
semi-structured information returned by the services is dynamically
integrated into the knowledge base.
We have developed a query rewriting algorithm that determines one or more
function compositions that need to be executed in order to evaluate a
SPARQL-style user query. The key idea is that the local knowledge base can
be used to guide the selection of values used as input parameters of
function calls. This is in contrast to the conventional approaches in the
literature, which would exhaustively materialize all values that can be
used as binding values for the input parameters.
%B Research Reports
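The "active knowledge" idea from the abstract fits in a few lines: when the
local knowledge base cannot answer a triple pattern, a registered function
standing in for a Web service is invoked, and its results are cached as
ordinary triples. A toy sketch with illustrative names, not ANGIE's actual
API:

local_kb = {('Paris', 'locatedIn'): ['France']}

def geo_service(city):
    # placeholder for a real Web-service call
    return {'Berlin': ['Germany']}.get(city, [])

services = {'locatedIn': geo_service}

def answer(subject, predicate):
    key = (subject, predicate)
    if key not in local_kb:                           # miss: invoke the service
        local_kb[key] = services[predicate](subject)  # and cache the result
    return local_kb[key]

print(answer('Paris', 'locatedIn'))    # answered from the local database
print(answer('Berlin', 'locatedIn'))   # answered by calling the service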
Generating Concise and Readable Summaries of XML documents
M. Ramanath, K. Sarath Kumar and G. Ifrim
Technical Report, 2009
M. Ramanath, K. Sarath Kumar and G. Ifrim
Technical Report, 2009
Export
BibTeX
@techreport{Ramanath2008a,
TITLE = {Generating Concise and Readable Summaries of {XML} documents},
AUTHOR = {Ramanath, Maya and Sarath Kumar, Kondreddi and Ifrim, Georgiana},
LANGUAGE = {eng},
NUMBER = {MPI-I-2009-5-002},
LOCALID = {Local-ID: C1256DBF005F876D-EA355A84178BB514C12575BA002A90E0-Ramanath2008},
YEAR = {2009},
DATE = {2009},
TYPE = {Research Reports},
}
Endnote
%0 Report
%A Ramanath, Maya
%A Sarath Kumar, Kondreddi
%A Ifrim, Georgiana
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Generating Concise and Readable Summaries of XML documents :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1915-6
%F EDOC: 520419
%F OTHER: Local-ID: C1256DBF005F876D-EA355A84178BB514C12575BA002A90E0-Ramanath2008
%D 2009
%B Research Reports
Constraint Solving for Interpolation
A. Rybalchenko and V. Sofronie-Stokkermans
Technical Report, 2009
A. Rybalchenko and V. Sofronie-Stokkermans
Technical Report, 2009
Export
BibTeX
@techreport{Rybalchenko-Sofronie-Stokkermans-2009,
TITLE = {Constraint Solving for Interpolation},
AUTHOR = {Rybalchenko, Andrey and Sofronie-Stokkermans, Viorica},
LANGUAGE = {eng},
LOCALID = {Local-ID: C125716C0050FB51-7BE33255DCBCF2AAC1257650004B7C65-Rybalchenko-Sofronie-Stokkermans-2009},
YEAR = {2009},
DATE = {2009},
}
Endnote
%0 Report
%A Rybalchenko, Andrey
%A Sofronie-Stokkermans, Viorica
%+ Automation of Logic, MPI for Informatics, Max Planck Society
%T Constraint Solving for Interpolation :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1A4A-6
%F EDOC: 521091
%F OTHER: Local-ID: C125716C0050FB51-7BE33255DCBCF2AAC1257650004B7C65-Rybalchenko-Sofronie-Stokkermans-2009
%D 2009
A Higher-order Structure Tensor
T. Schultz, J. Weickert and H.-P. Seidel
Technical Report, 2009
T. Schultz, J. Weickert and H.-P. Seidel
Technical Report, 2009
Abstract
Structure tensors are a common tool for orientation estimation in
image processing and computer vision. We present a generalization of
the traditional second-order model to a higher-order structure
tensor (HOST), which is able to model more than one significant
orientation, as found in corners, junctions, and multi-channel images. We
provide a theoretical analysis and a number of mathematical tools
that facilitate practical use of the HOST, visualize it using a
novel glyph for higher-order tensors, and demonstrate how it can be
applied in an improved integrated edge, corner, and junction detection.
Export
BibTeX
@techreport{SchultzlWeickertSeidel2007,
TITLE = {A Higher-order Structure Tensor},
AUTHOR = {Schultz, Thomas and Weickert, Joachim and Seidel, Hans-Peter},
LANGUAGE = {eng},
NUMBER = {MPI-I-2007-4-005},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2009},
DATE = {2009},
ABSTRACT = {Structure tensors are a common tool for orientation estimation in image processing and computer vision. We present a generalization of the traditional second-order model to a higher-order structure tensor (HOST), which is able to model more than one significant orientation, as found in corners, junctions, and multi-channel images. We provide a theoretical analysis and a number of mathematical tools that facilitate practical use of the HOST, visualize it using a novel glyph for higher-order tensors, and demonstrate how it can be applied in an improved integrated edge, corner, and junction detection.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Schultz, Thomas
%A Weickert, Joachim
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
%T A Higher-order Structure Tensor :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0027-13BC-7
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2009
%X Structure tensors are a common tool for orientation estimation in
image processing and computer vision. We present a generalization of
the traditional second-order model to a higher-order structure
tensor (HOST), which is able to model more than one significant
orientation, as found in corners, junctions, and multi-channel images. We
provide a theoretical analysis and a number of mathematical tools
that facilitate practical use of the HOST, visualize it using a
novel glyph for higher-order tensors, and demonstrate how it can be
applied in an improved integrated edge, corner, and junction detection.
%B Research Report
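The generalization the abstract describes can be sketched compactly: the
classical structure tensor averages the outer product of the image gradient
with itself, and a higher-order variant averages the k-fold outer product,
which can preserve the superposition of several orientations. A simplified
illustration (our own code with default derivative and smoothing kernels,
not the authors' reference implementation):

import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor(img, order=2, sigma=2.0):
    # order-k structure tensor of a 2D image: Gaussian average of the
    # k-fold outer product of the gradient; shape (2,)*order + img.shape
    g = np.stack([sobel(img, axis=0), sobel(img, axis=1)])  # gradient, (2, H, W)
    t = g
    for _ in range(order - 1):
        # per-pixel outer product with the gradient adds one tensor index
        t = g[(slice(None),) + (None,) * (t.ndim - 2)] * t[None]
    # smooth over the image plane only, not over the tensor indices
    return gaussian_filter(t, sigma=[0] * (t.ndim - 2) + [sigma, sigma])

img = np.zeros((64, 64)); img[:, 32:] = 1.0     # vertical edge test image
print(structure_tensor(img, order=2).shape)     # (2, 2, 64, 64): classical case
print(structure_tensor(img, order=4).shape)     # (2, 2, 2, 2, 64, 64): higher order

The even orders matter here: an order-2 tensor averages two opposite edge
orientations into one blurred direction, while higher even orders keep them
apart, which is what makes corners and junctions distinguishable.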
Optical reconstruction of detailed animatable human body models
C. Stoll
Technical Report, 2009
C. Stoll
Technical Report, 2009
Export
BibTeX
@techreport{Stoll2009,
TITLE = {Optical reconstruction of detailed animatable human body models},
AUTHOR = {Stoll, Carsten},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-4-006},
NUMBER = {MPI-I-2009-4-006},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2009},
DATE = {2009},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Stoll, Carsten
%+ Computer Graphics, MPI for Informatics, Max Planck Society
%T Optical reconstruction of detailed animatable human body models :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-665F-0
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-4-006
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2009
%P 37 p.
%B Research Report / Max-Planck-Institut für Informatik
Contextual Rewriting
C. Weidenbach and P. Wischnewski
Technical Report, 2009
C. Weidenbach and P. Wischnewski
Technical Report, 2009
Export
BibTeX
@techreport{WischnewskiWeidenbach2009,
TITLE = {Contextual Rewriting},
AUTHOR = {Weidenbach, Christoph and Wischnewski, Patrick},
LANGUAGE = {eng},
NUMBER = {MPI-I-2009-RG1-002},
LOCALID = {Local-ID: C125716C0050FB51-DD89BAB0441DE797C125757F0034B8CB-WeidenbachWischnewskiReport2009},
YEAR = {2009},
DATE = {2009},
}
Endnote
%0 Report
%A Weidenbach, Christoph
%A Wischnewski, Patrick
%+ Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
%T Contextual Rewriting :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1A4C-2
%F EDOC: 521106
%F OTHER: Local-ID: C125716C0050FB51-DD89BAB0441DE797C125757F0034B8CB-WeidenbachWischnewskiReport2009
%D 2009
2008
Characterizing the performance of Flash memory storage devices and its impact on algorithm design
D. Ajwani, I. Malinger, U. Meyer and S. Toledo
Technical Report, 2008
D. Ajwani, I. Malinger, U. Meyer and S. Toledo
Technical Report, 2008
Abstract
Initially used in digital audio players, digital cameras, mobile
phones, and USB memory sticks, flash memory may become the dominant
form of end-user storage in mobile computing, either completely
replacing the magnetic hard disks or being an additional secondary
storage. We study the design of algorithms and data structures that
can exploit the flash memory devices better. For this, we characterize
the performance of NAND flash based storage devices, including many
solid state disks. We show that these devices have better random read
performance than hard disks, but much worse random write performance.
We also analyze the effect of misalignments, aging and past I/O
patterns etc. on the performance obtained on these devices. We show
that despite the similarities between flash memory and RAM (fast
random reads) and between flash disk and hard disk (both are block
based devices), the algorithms designed in the RAM model or the
external memory model do not realize the full potential of the flash
memory devices. We later give some broad guidelines for designing
algorithms which can exploit the comparative advantages of both a
flash memory device and a hard disk, when used together.
Export
BibTeX
@techreport{AjwaniMalingerMeyerToledo2008,
TITLE = {Characterizing the performance of Flash memory storage devices and its impact on algorithm design},
AUTHOR = {Ajwani, Deepak and Malinger, Itay and Meyer, Ulrich and Toledo, Sivan},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-1-001},
NUMBER = {MPI-I-2008-1-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2008},
DATE = {2008},
ABSTRACT = {Initially used in digital audio players, digital cameras, mobile phones, and USB memory sticks, flash memory may become the dominant form of end-user storage in mobile computing, either completely replacing the magnetic hard disks or being an additional secondary storage. We study the design of algorithms and data structures that can exploit the flash memory devices better. For this, we characterize the performance of NAND flash based storage devices, including many solid state disks. We show that these devices have better random read performance than hard disks, but much worse random write performance. We also analyze the effect of misalignments, aging and past I/O patterns etc. on the performance obtained on these devices. We show that despite the similarities between flash memory and RAM (fast random reads) and between flash disk and hard disk (both are block based devices), the algorithms designed in the RAM model or the external memory model do not realize the full potential of the flash memory devices. We later give some broad guidelines for designing algorithms which can exploit the comparative advantages of both a flash memory device and a hard disk, when used together.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Ajwani, Deepak
%A Malinger, Itay
%A Meyer, Ulrich
%A Toledo, Sivan
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T Characterizing the performance of Flash memory storage devices and its impact on algorithm design :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-66C7-6
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-1-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2008
%P 36 p.
%X Initially used in digital audio players, digital cameras, mobile
phones, and USB memory sticks, flash memory may become the dominant
form of end-user storage in mobile computing, either completely
replacing the magnetic hard disks or being an additional secondary
storage. We study the design of algorithms and data structures that
can exploit the flash memory devices better. For this, we characterize
the performance of NAND flash based storage devices, including many
solid state disks. We show that these devices have better random read
performance than hard disks, but much worse random write performance.
We also analyze the effect of misalignments, aging and past I/O
patterns etc. on the performance obtained on these devices. We show
that despite the similarities between flash memory and RAM (fast
random reads) and between flash disk and hard disk (both are block
based devices), the algorithms designed in the RAM model or the
external memory model do not realize the full potential of the flash
memory devices. We later give some broad guidelines for designing
algorithms which can exploit the comparative advantages of both a
flash memory device and a hard disk, when used together.
%B Research Report
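The read/write asymmetry reported above is straightforward to reproduce
with a microbenchmark along these lines. This is a rough sketch only: path,
sizes, and counts are placeholders, and a careful measurement would bypass
the OS page cache (e.g. via O_DIRECT) and control for alignment and device
history, as the abstract notes.

import os, random, time

PATH, BLOCK, FILE_SIZE, OPS = '/tmp/flashtest.bin', 4096, 64 * 2**20, 512

fd = os.open(PATH, os.O_RDWR | os.O_CREAT)
os.ftruncate(fd, FILE_SIZE)
offsets = [random.randrange(FILE_SIZE // BLOCK) * BLOCK for _ in range(OPS)]

t0 = time.perf_counter()
for off in offsets:                     # random block-aligned reads
    os.pread(fd, BLOCK, off)
t1 = time.perf_counter()
buf = os.urandom(BLOCK)
for off in offsets:                     # random block-aligned writes
    os.pwrite(fd, buf, off)
    os.fsync(fd)                        # force each write out to the device
t2 = time.perf_counter()
os.close(fd)

print(f'random read : {(t1 - t0) / OPS * 1e6:.1f} us/op')
print(f'random write: {(t2 - t1) / OPS * 1e6:.1f} us/op')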
Prototype Implementation of the Algebraic Kernel
E. Berberich, M. Hemmer, M. Karavelas, S. Pion, M. Teillaud and E. Tsigaridas
Technical Report, 2008
E. Berberich, M. Hemmer, M. Karavelas, S. Pion, M. Teillaud and E. Tsigaridas
Technical Report, 2008
Abstract
In this report we describe the current progress with respect to prototype
implementations of algebraic kernels within the ACS project. More specifically,
we report on: (1) the Cgal package Algebraic_kernel_for_circles_2_2 aimed at
providing the necessary algebraic functionality required for treating circular
arcs; (2) an interface between Cgal and SYNAPS for accessing the algebraic
functionality in the SYNAPS library; (3) the NumeriX library (part of the
EXACUS project) which is a prototype implementation of a set of algebraic tools
on univariate polynomials, needed to build an algebraic kernel and (4) a rough
CGAL-like prototype implementation of a set of algebraic tools on univariate
polynomials.
Export
BibTeX
@techreport{ACS-TR-121202-01,
TITLE = {Prototype Implementation of the Algebraic Kernel},
AUTHOR = {Berberich, Eric and Hemmer, Michael and Karavelas, Menelaos and Pion, Sylvain and Teillaud, Monique and Tsigaridas, Elias},
LANGUAGE = {eng},
NUMBER = {ACS-TR-121202-01},
INSTITUTION = {University of Groningen},
ADDRESS = {Groningen},
YEAR = {2008},
DATE = {2008},
ABSTRACT = {In this report we describe the current progress with respect to prototype implementations of algebraic kernels within the ACS project. More specifically, we report on: (1) the Cgal package Algebraic_kernel_for_circles_2_2 aimed at providing the necessary algebraic functionality required for treating circular arcs; (2) an interface between Cgal and SYNAPS for accessing the algebraic functionality in the SYNAPS library; (3) the NumeriX library (part of the EXACUS project) which is a prototype implementation of a set of algebraic tools on univariate polynomials, needed to build an algebraic kernel and (4) a rough CGAL-like prototype implementation of a set of algebraic tools on univariate polynomials.},
}
Endnote
%0 Report
%A Berberich, Eric
%A Hemmer, Michael
%A Karavelas, Menelaos
%A Pion, Sylvain
%A Teillaud, Monique
%A Tsigaridas, Elias
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
%T Prototype Implementation of the Algebraic Kernel :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0019-E387-2
%Y University of Groningen
%C Groningen
%D 2008
%X In this report we describe the current progress with respect to prototype
implementations of algebraic kernels within the ACS project. More specifically,
we report on: (1) the Cgal package Algebraic_kernel_for_circles_2_2 aimed at
providing the necessary algebraic functionality required for treating circular
arcs; (2) an interface between Cgal and SYNAPS for accessing the algebraic
functionality in the SYNAPS library; (3) the NumeriX library (part of the
EXACUS project) which is a prototype implementation of a set of algebraic tools
on univariate polynomials, needed to build an algebraic kernel and (4) a rough
CGAL-like prototype implementation of a set of algebraic tools on univariate
polynomials.
%U http://www.researchgate.net/publication/254300442_Prototype_implementation_of_the_algebraic_kernel
Slippage Features
M. Bokeloh, A. Berner, M. Wand, H.-P. Seidel and A. Schilling
Technical Report, 2008
M. Bokeloh, A. Berner, M. Wand, H.-P. Seidel and A. Schilling
Technical Report, 2008
Export
BibTeX
@techreport{Bokeloh2008,
TITLE = {Slippage Features},
AUTHOR = {Bokeloh, Martin and Berner, Alexander and Wand, Michael and Seidel, Hans-Peter and Schilling, Andreas},
LANGUAGE = {eng},
ISSN = {0946-3852},
URL = {urn:nbn:de:bsz:21-opus-33880},
NUMBER = {WSI-2008-03},
INSTITUTION = {Wilhelm-Schickard-Institut / Universit{\"a}t T{\"u}bingen},
ADDRESS = {T{\"u}bingen},
YEAR = {2008},
DATE = {2008},
TYPE = {WSI},
VOLUME = {2008-03},
}
Endnote
%0 Report
%A Bokeloh, Martin
%A Berner, Alexander
%A Wand, Michael
%A Seidel, Hans-Peter
%A Schilling, Andreas
%+ External Organizations
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
%T Slippage Features :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0023-D3FC-F
%U urn:nbn:de:bsz:21-opus-33880
%Y Wilhelm-Schickard-Institut / Universität Tübingen
%C Tübingen
%D 2008
%P 17 p.
%B WSI
%N 2008-03
%@ false
%U http://nbn-resolving.de/urn:nbn:de:bsz:21-opus-33880
Data Modifications and Versioning in Trio
A. Das Sarma, M. Theobald and J. Widom
Technical Report, 2008
A. Das Sarma, M. Theobald and J. Widom
Technical Report, 2008
Export
BibTeX
@techreport{ilpubs-849,
TITLE = {Data Modifications and Versioning in Trio},
AUTHOR = {Das Sarma, Anish and Theobald, Martin and Widom, Jennifer},
LANGUAGE = {eng},
URL = {http://ilpubs.stanford.edu:8090/849/},
NUMBER = {ILPUBS-849},
INSTITUTION = {Stanford University InfoLab},
ADDRESS = {Stanford, CA},
YEAR = {2008},
TYPE = {Technical Report},
}
Endnote
%0 Report
%A Das Sarma, Anish
%A Theobald, Martin
%A Widom, Jennifer
%+ External Organizations
Databases and Information Systems, MPI for Informatics, Max Planck Society
External Organizations
%T Data Modifications and Versioning in Trio :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0027-AED6-D
%U http://ilpubs.stanford.edu:8090/849/
%Y Stanford University InfoLab
%C Stanford, CA
%D 2008
%B Technical Report
Integrating Yago into the suggested upper merged ontology
G. de Melo, F. Suchanek and A. Pease
Technical Report, 2008
G. de Melo, F. Suchanek and A. Pease
Technical Report, 2008
Abstract
Ontologies are becoming more and more popular as background knowledge for
intelligent applications. Up to now, there has been a schism between
manually assembled, highly axiomatic ontologies and large, automatically
constructed knowledge bases. This report discusses how the two worlds can
be brought together by combining the high-level axiomatizations from the
Suggested Upper Merged Ontology (SUMO) with the extensive world knowledge
of the YAGO ontology. On the theoretical side, it analyses the differences
between the knowledge representation in YAGO and SUMO. On the practical
side, this report explains how the two resources can be merged. This
yields a new large-scale formal ontology, which provides information about
millions of entities such as people, cities, organizations, and companies.
This report is the detailed version of our paper at ICTAI 2008.
Export
BibTeX
@techreport{deMeloSuchanekPease2008,
TITLE = {Integrating Yago into the suggested upper merged ontology},
AUTHOR = {de Melo, Gerard and Suchanek, Fabian and Pease, Adam},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-5-003},
NUMBER = {MPI-I-2008-5-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2008},
DATE = {2008},
ABSTRACT = {Ontologies are becoming more and more popular as background knowledge for intelligent applications. Up to now, there has been a schism between manually assembled, highly axiomatic ontologies and large, automatically constructed knowledge bases. This report discusses how the two worlds can be brought together by combining the high-level axiomatizations from the Suggested Upper Merged Ontology (SUMO) with the extensive world knowledge of the YAGO ontology. On the theoretical side, it analyses the differences between the knowledge representation in YAGO and SUMO. On the practical side, this report explains how the two resources can be merged. This yields a new large-scale formal ontology, which provides information about millions of entities such as people, cities, organizations, and companies. This report is the detailed version of our paper at ICTAI 2008.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A de Melo, Gerard
%A Suchanek, Fabian
%A Pease, Adam
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
External Organizations
%T Integrating Yago into the suggested upper merged ontology :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-66AB-6
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-5-003
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2008
%P 31 p.
%X Ontologies are becoming more and more popular as background knowledge for
intelligent applications. Up to now, there has been a schism between
manually assembled, highly axiomatic ontologies and large, automatically
constructed knowledge bases. This report discusses how the two worlds can
be brought together by combining the high-level axiomatizations from the
Suggested Upper Merged Ontology (SUMO) with the extensive world knowledge
of the YAGO ontology. On the theoretical side, it analyses the differences
between the knowledge representation in YAGO and SUMO. On the practical
side, this report explains how the two resources can be merged. This
yields a new large-scale formal ontology, which provides information about
millions of entities such as people, cities, organizations, and companies.
This report is the detailed version of our paper at ICTAI 2008.
%B Research Report / Max-Planck-Institut für Informatik
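The merge described above can be pictured with a toy example: SUMO supplies
the class hierarchy and axioms, YAGO supplies typed instances, and linking
YAGO types to SUMO classes lets instance facts inherit the axiomatization.
The identifiers below are simplified stand-ins, not the actual SUMO/YAGO
vocabulary:

sumo_subclass = {'City': 'GeopoliticalArea', 'GeopoliticalArea': 'Region'}
yago_type = {'Saarbruecken': 'City'}         # instance -> linked SUMO class

def is_instance_of(entity, cls):
    t = yago_type.get(entity)
    while t is not None:                     # walk up SUMO's class hierarchy
        if t == cls:
            return True
        t = sumo_subclass.get(t)
    return False

print(is_instance_of('Saarbruecken', 'Region'))   # True, via the merged hierarchy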
Labelled splitting
A. L. Fietzke and C. Weidenbach
Technical Report, 2008
A. L. Fietzke and C. Weidenbach
Technical Report, 2008
Abstract
We define a superposition calculus with explicit splitting and
an explicit, new backtracking rule on the basis of labelled clauses.
For the first time we show a superposition calculus with explicit
backtracking rule sound and complete. The new backtracking rule advances
backtracking with branch condensing known from SPASS.
An experimental evaluation of an implementation of the new rule
shows that it considerably improves on the
previous SPASS splitting implementation.
Finally, we discuss the relationship between labelled first-order
splitting and DPLL style splitting with intelligent backtracking
and clause learning.
Export
BibTeX
@techreport{FietzkeWeidenbach2008,
TITLE = {Labelled splitting},
AUTHOR = {Fietzke, Arnaud Luc and Weidenbach, Christoph},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-RG1-001},
NUMBER = {MPI-I-2008-RG1-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2008},
DATE = {2008},
ABSTRACT = {We define a superposition calculus with explicit splitting and an explicit, new backtracking rule on the basis of labelled clauses. For the first time we show a superposition calculus with explicit backtracking rule sound and complete. The new backtracking rule advances backtracking with branch condensing known from SPASS. An experimental evaluation of an implementation of the new rule shows that it considerably improves on the previous SPASS splitting implementation. Finally, we discuss the relationship between labelled first-order splitting and DPLL style splitting with intelligent backtracking and clause learning.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Fietzke, Arnaud Luc
%A Weidenbach, Christoph
%+ Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
%T Labelled splitting :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6674-D
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-RG1-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2008
%P 45 p.
%X We define a superposition calculus with explicit splitting and
an explicit, new backtracking rule on the basis of labelled clauses.
For the first time we show a superposition calculus with explicit
backtracking rule sound and complete. The new backtracking rule advances
backtracking with branch condensing known from SPASS.
An experimental evaluation of an implementation of the new rule
shows that it considerably improves on the
previous SPASS splitting implementation.
Finally, we discuss the relationship between labelled first-order
splitting and DPLL style splitting with intelligent backtracking
and clause learning.
%B Research Report
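For orientation, the splitting step the report builds on can be stated as
follows (notation ours): a clause whose parts share no variables may be
split into two branches, and the labels record which splits each derived
clause depends on, so that backtracking can remove exactly the clauses of a
refuted branch:

\[
\frac{N \cup \{\, C \vee D \,\}}{N \cup \{C\} \;\;\big|\;\; N \cup \{D\}}
\qquad \text{if } \operatorname{vars}(C) \cap \operatorname{vars}(D) = \emptyset .
\]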
STAR: Steiner tree approximation in relationship-graphs
G. Kasneci, M. Ramanath, M. Sozio, F. Suchanek and G. Weikum
Technical Report, 2008
G. Kasneci, M. Ramanath, M. Sozio, F. Suchanek and G. Weikum
Technical Report, 2008
Abstract
Large-scale graphs and networks are abundant in modern information systems:
entity-relationship graphs over relational data or Web-extracted entities,
biological networks, social online communities, knowledge bases, and
many more. Often such data comes with expressive node and edge labels that
allow an interpretation as a semantic graph, and edge weights that reflect
the strengths of semantic relations between entities. Finding close
relationships between a given set of two, three, or more entities is an
important building block for many search, ranking, and analysis tasks.
From an algorithmic point of view, this translates into computing the best
Steiner trees between the given nodes, a classical NP-hard problem. In
this paper, we present a new approximation algorithm, coined STAR, for
relationship queries over large graphs that do not fit into memory. We
prove that for n query entities, STAR yields an O(log(n))-approximation of
the optimal Steiner tree, and show that in practical cases the results
returned by STAR are qualitatively better than the results returned by a
classical 2-approximation algorithm. We then describe an extension to our
algorithm to return the top-k Steiner trees. Finally, we evaluate our
algorithm over both main-memory as well as completely disk-resident graphs
containing millions of nodes. Our experiments show that STAR outperforms
the best state-of-the-art approaches and returns qualitatively better results.
Export
BibTeX
@techreport{KasneciRamanathSozioSuchanekWeikum2008,
TITLE = {{STAR}: Steiner tree approximation in relationship-graphs},
AUTHOR = {Kasneci, Gjergji and Ramanath, Maya and Sozio, Mauro and Suchanek, Fabian and Weikum, Gerhard},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-5-001},
NUMBER = {MPI-I-2008-5-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2008},
DATE = {2008},
ABSTRACT = {Large-scale graphs and networks are abundant in modern information systems: entity-relationship graphs over relational data or Web-extracted entities, biological networks, social online communities, knowledge bases, and many more. Often such data comes with expressive node and edge labels that allow an interpretation as a semantic graph, and edge weights that reflect the strengths of semantic relations between entities. Finding close relationships between a given set of two, three, or more entities is an important building block for many search, ranking, and analysis tasks. From an algorithmic point of view, this translates into computing the best Steiner trees between the given nodes, a classical NP-hard problem. In this paper, we present a new approximation algorithm, coined STAR, for relationship queries over large graphs that do not fit into memory. We prove that for n query entities, STAR yields an O(log(n))-approximation of the optimal Steiner tree, and show that in practical cases the results returned by STAR are qualitatively better than the results returned by a classical 2-approximation algorithm. We then describe an extension to our algorithm to return the top-k Steiner trees. Finally, we evaluate our algorithm over both main-memory as well as completely disk-resident graphs containing millions of nodes. Our experiments show that STAR outperforms the best state-of-the-art approaches and returns qualitatively better results.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Kasneci, Gjergji
%A Ramanath, Maya
%A Sozio, Mauro
%A Suchanek, Fabian
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T STAR: Steiner tree approximation in relationship-graphs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-66B3-1
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-5-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2008
%P 37 p.
%X Large-scale graphs and networks are abundant in modern information systems:
entity-relationship graphs over relational data or Web-extracted entities,
biological networks, social online communities, knowledge bases, and
many more. Often such data comes with expressive node and edge labels that
allow an interpretation as a semantic graph, and edge weights that reflect
the strengths of semantic relations between entities. Finding close
relationships between a given set of two, three, or more entities is an
important building block for many search, ranking, and analysis tasks.
From an algorithmic point of view, this translates into computing the best
Steiner trees between the given nodes, a classical NP-hard problem. In
this paper, we present a new approximation algorithm, coined STAR, for
relationship queries over large graphs that do not fit into memory. We
prove that for n query entities, STAR yields an O(log(n))-approximation of
the optimal Steiner tree, and show that in practical cases the results
returned by STAR are qualitatively better than the results returned by a
classical 2-approximation algorithm. We then describe an extension to our
algorithm to return the top-k Steiner trees. Finally, we evaluate our
algorithm over both main-memory as well as completely disk-resident graphs
containing millions of nodes. Our experiments show that STAR outperforms
the best state-of-the-art approaches and returns qualitatively better results.
%B Research Report / Max-Planck-Institut für Informatik
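For context, the classical 2-approximation that the abstract compares STAR
against grows a tree from one terminal and repeatedly splices in the
shortest path to the nearest remaining terminal. A compact in-memory sketch
of that baseline (our illustration, not the STAR algorithm itself; the
graph is assumed connected):

import heapq

def dijkstra(graph, sources):
    # distance and predecessor from the nearest node in `sources`
    dist = {s: 0.0 for s in sources}
    pred, heap = {}, [(0.0, s) for s in sources]
    heapq.heapify(heap)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, float('inf')):
                dist[v], pred[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return dist, pred

def steiner_2approx(graph, terminals):
    # grow a tree from the first terminal; repeatedly splice in the
    # shortest path to the nearest remaining terminal
    tree, todo, edges = {terminals[0]}, set(terminals[1:]), set()
    while todo:
        dist, pred = dijkstra(graph, tree)
        t = min(todo, key=lambda x: dist[x])
        todo.discard(t)
        while t not in tree:
            tree.add(t)
            edges.add((pred[t], t))
            t = pred[t]
    return edges

g = {'a': [('b', 1)], 'b': [('a', 1), ('c', 1), ('d', 3)],
     'c': [('b', 1), ('d', 1)], 'd': [('b', 3), ('c', 1)]}
print(steiner_2approx(g, ['a', 'd']))   # the path a-b-c-d, cost 3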
Single phase construction of optimal DAG-structured QEPs
T. Neumann and G. Moerkotte
Technical Report, 2008
T. Neumann and G. Moerkotte
Technical Report, 2008
Abstract
Traditionally, database management systems use tree-structured query
evaluation plans. They are easy to implement but not expressive enough
for some optimizations like eliminating common algebraic subexpressions
or magic sets. These require directed acyclic graphs (DAGs), i.e.
shared subplans.
Existing approaches consider DAGs merely for special cases
and not in full generality.
We introduce a novel framework to reason about sharing of subplans
and, thus, DAG-structured query evaluation plans.
Then, we present the first plan generator capable
of generating optimal DAG-structured query evaluation plans.
The experimental results show that with no or only a modest
increase of plan generation time, a major reduction
of query execution time can be
achieved for common queries.
Export
BibTeX
@techreport{NeumannMoerkotte2008,
TITLE = {Single phase construction of optimal {DAG}-structured {QEPs}},
AUTHOR = {Neumann, Thomas and Moerkotte, Guido},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-5-002},
NUMBER = {MPI-I-2008-5-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2008},
DATE = {2008},
ABSTRACT = {Traditionally, database management systems use tree-structured query evaluation plans. They are easy to implement but not expressive enough for some optimizations like eliminating common algebraic subexpressions or magic sets. These require directed acyclic graphs (DAGs), i.e. shared subplans. Existing approaches consider DAGs merely for special cases and not in full generality. We introduce a novel framework to reason about sharing of subplans and, thus, DAG-structured query evaluation plans. Then, we present the first plan generator capable of generating optimal DAG-structured query evaluation plans. The experimental results show that with no or only a modest increase of plan generation time, a major reduction of query execution time can be achieved for common queries.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Neumann, Thomas
%A Moerkotte, Guido
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
External Organizations
%T Single phase construction of optimal DAG-structured QEPs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-66B0-7
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-5-002
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2008
%P 73 p.
%X Traditionally, database management systems use tree-structured query
evaluation plans. They are easy to implement but not expressive enough
for some optimizations like eliminating common algebraic subexpressions
or magic sets. These require directed acyclic graphs (DAGs), i.e.
shared subplans.
Existing approaches consider DAGs merely for special cases
and not in full generality.
We introduce a novel framework to reason about sharing of subplans
and, thus, DAG-structured query evaluation plans.
Then, we present the first plan generator capable
of generating optimal DAG-structured query evaluation plans.
The experimental results show that with no or only a modest
increase of plan generation time, a major reduction
of query execution time can be
achieved for common queries.
%B Research Report / Max-Planck-Institut für Informatik
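The tree-versus-DAG distinction at the heart of the abstract is easy to
demonstrate: in a tree-shaped plan a common subexpression is evaluated once
per parent, while in a DAG-shaped plan it is represented once and its
result is shared. A toy sketch with generic operators of our own, not the
paper's plan algebra:

class Scan:
    def __init__(self, name, rows): self.name, self.rows = name, rows
    def run(self):
        print(f'scan {self.name} executed')
        return self.rows

class Join:
    def __init__(self, left, right): self.left, self.right = left, right
    def run(self):
        left, right = self.left.run(), self.right.run()  # each child runs once
        return [(l, r) for l in left for r in right]

class Shared:
    # materializes its subplan on first use; later parents reuse the result
    def __init__(self, child): self.child, self.cache = child, None
    def run(self):
        if self.cache is None:
            self.cache = self.child.run()
        return self.cache

common = Shared(Scan('R', [1, 2]))                 # common algebraic subexpression
plan = Join(Join(common, Scan('S', [3])), common)  # DAG: `common` has two parents
plan.run()   # 'scan R executed' appears once, although R feeds both joins

The plan generator described above has to decide where such sharing pays
off; the sketch only shows why a DAG can be cheaper to execute than the
equivalent tree with the subexpression duplicated.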
Crease surfaces: from theory to extraction and application to diffusion tensor MRI
T. Schultz, H. Theisel and H.-P. Seidel
Technical Report, 2008
T. Schultz, H. Theisel and H.-P. Seidel
Technical Report, 2008
Abstract
Crease surfaces are two-dimensional manifolds along which a scalar
field assumes a local maximum (ridge) or a local minimum (valley) in
a constrained space. Unlike isosurfaces, they are able to capture
extremal structures in the data. Creases have a long tradition in
image processing and computer vision, and have recently become a
popular tool for visualization. When extracting crease surfaces,
degeneracies of the Hessian (i.e., lines along which two eigenvalues
are equal) have so far been ignored. We show that these loci,
however, have two important consequences for the topology of crease
surfaces: First, creases are bounded not only by a side constraint
on eigenvalue sign, but also by Hessian degeneracies. Second, crease
surfaces are not in general orientable. We describe an efficient
algorithm for the extraction of crease surfaces which takes these
insights into account and demonstrate that it produces more accurate
results than previous approaches. Finally, we show that DT-MRI
streamsurfaces, which were previously used for the analysis of
planar regions in diffusion tensor MRI data, are mathematically
ill-defined. As an example application of our method, creases in a
measure of planarity are presented as a viable substitute.
Export
BibTeX
@techreport{SchultzTheiselSeidel2008,
TITLE = {Crease surfaces: from theory to extraction and application to diffusion tensor {MRI}},
AUTHOR = {Schultz, Thomas and Theisel, Holger and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-4-003},
NUMBER = {MPI-I-2008-4-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2008},
DATE = {2008},
ABSTRACT = {Crease surfaces are two-dimensional manifolds along which a scalar field assumes a local maximum (ridge) or a local minimum (valley) in a constrained space. Unlike isosurfaces, they are able to capture extremal structures in the data. Creases have a long tradition in image processing and computer vision, and have recently become a popular tool for visualization. When extracting crease surfaces, degeneracies of the Hessian (i.e., lines along which two eigenvalues are equal) have so far been ignored. We show that these loci, however, have two important consequences for the topology of crease surfaces: First, creases are bounded not only by a side constraint on eigenvalue sign, but also by Hessian degeneracies. Second, crease surfaces are not in general orientable. We describe an efficient algorithm for the extraction of crease surfaces which takes these insights into account and demonstrate that it produces more accurate results than previous approaches. Finally, we show that DT-MRI streamsurfaces, which were previously used for the analysis of planar regions in diffusion tensor MRI data, are mathematically ill-defined. As an example application of our method, creases in a measure of planarity are presented as a viable substitute.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Schultz, Thomas
%A Theisel, Holger
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Crease surfaces: from theory to extraction and application to diffusion tensor MRI :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-66B6-C
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-4-003
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2008
%P 33 p.
%X Crease surfaces are two-dimensional manifolds along which a scalar
field assumes a local maximum (ridge) or a local minimum (valley) in
a constrained space. Unlike isosurfaces, they are able to capture
extremal structures in the data. Creases have a long tradition in
image processing and computer vision, and have recently become a
popular tool for visualization. When extracting crease surfaces,
degeneracies of the Hessian (i.e., lines along which two eigenvalues
are equal) have so far been ignored. We show that these loci,
however, have two important consequences for the topology of crease
surfaces: First, creases are bounded not only by a side constraint
on eigenvalue sign, but also by Hessian degeneracies. Second, crease
surfaces are not in general orientable. We describe an efficient
algorithm for the extraction of crease surfaces which takes these
insights into account and demonstrate that it produces more accurate
results than previous approaches. Finally, we show that DT-MRI
streamsurfaces, which were previously used for the analysis of
planar regions in diffusion tensor MRI data, are mathematically
ill-defined. As an example application of our method, creases in a
measure of planarity are presented as a viable substitute.
%B Research Report / Max-Planck-Institut für Informatik
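As a reminder of the objects involved, the usual height-ridge definition
that crease extraction starts from reads, in our notation: with Hessian
eigenpairs \(H\mathbf{e}_i = \lambda_i \mathbf{e}_i\) ordered
\(\lambda_1 \le \lambda_2 \le \lambda_3\), a point of a 3D scalar field
\(f\) lies on a ridge surface when

\[
\nabla f \cdot \mathbf{e}_1 = 0 \quad \text{and} \quad \lambda_1 < 0,
\]

and on a valley surface when \(\nabla f \cdot \mathbf{e}_3 = 0\) with
\(\lambda_3 > 0\). The degeneracies the report highlights are the loci
\(\lambda_1 = \lambda_2\) (respectively \(\lambda_2 = \lambda_3\)), where
the relevant eigenvector is no longer uniquely defined, and which therefore
bound and twist the extracted surfaces.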
Efficient Hierarchical Reasoning about Functions over Numerical Domains
V. Sofronie-Stokkermans
Technical Report, 2008a
V. Sofronie-Stokkermans
Technical Report, 2008a
Abstract
We show that many properties studied in mathematical
analysis (monotonicity, boundedness, inverse, Lipschitz
properties, possibly combined with continuity or derivability)
are expressible by formulae in a class for which sound and
complete hierarchical proof methods for testing satisfiability of
sets of ground clauses exist.
The results are useful for automated reasoning in mathematical
analysis and for the verification of hybrid systems.
Export
BibTeX
@techreport{Sofronie-Stokkermans-atr45-2008,
TITLE = {Efficient Hierarchical Reasoning about Functions over Numerical Domains},
AUTHOR = {Sofronie-Stokkermans, Viorica},
LANGUAGE = {eng},
ISSN = {1860-9821},
NUMBER = {ATR45},
INSTITUTION = {SFB/TR 14 AVACS},
YEAR = {2008},
DATE = {2008},
ABSTRACT = {We show that many properties studied in mathematical analysis (monotonicity, boundedness, inverse, Lipschitz properties, possibly combined with continuity or derivability) are expressible by formulae in a class for which sound and complete hierarchical proof methods for testing satisfiability of sets of ground clauses exist. The results are useful for automated reasoning in mathematical analysis and for the verification of hybrid systems.},
TYPE = {AVACS Technical Report},
VOLUME = {45},
}
Endnote
%0 Report
%A Sofronie-Stokkermans, Viorica
%+ Automation of Logic, MPI for Informatics, Max Planck Society
%T Efficient Hierarchical Reasoning about Functions over Numerical Domains :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0027-A46C-B
%Y SFB/TR 14 AVACS
%D 2008
%P 17 p.
%X We show that many properties studied in mathematical
analysis (monotonicity, boundedness, inverse, Lipschitz
properties, possibly combined with continuity or derivability)
are expressible by formulae in a class for which sound and
complete hierarchical proof methods for testing satisfiability of
sets of ground clauses exist.
The results are useful for automated reasoning in mathematical
analysis and for the verification of hybrid systems.
%B AVACS Technical Report
%N 45
%@ false
%U http://www.avacs.org/fileadmin/Publikationen/Open/avacs_technical_report_045.pdf
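Typical instances of the properties listed in the abstract, written as the
kind of universally quantified axioms the hierarchical method operates on
(illustrative renderings only; the precise clause classes for which the
method is sound and complete are specified in the report):

\[
\begin{aligned}
\text{monotonicity:} &\quad \forall x, y \;\, (x \le y \rightarrow f(x) \le f(y)),\\
\text{boundedness:} &\quad \forall x \;\, (a \le f(x) \le b),\\
\text{Lipschitz continuity:} &\quad \forall x, y \;\, (\lvert f(x) - f(y) \rvert \le L \cdot \lvert x - y \rvert).
\end{aligned}
\]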
Sheaves and Geometric Logic and Applications to Modular Verification of Complex Systems
V. Sofronie-Stokkermans
Technical Report, 2008b
V. Sofronie-Stokkermans
Technical Report, 2008b
Abstract
In this paper we show that states, transitions and behavior of
concurrent systems can often be modeled as sheaves over a
suitable topological space (where the topology expresses how the
interacting systems share the information). This allows us to use
results from categorical logic (and in particular geometric
logic) to describe which type of properties are transferred, if
valid locally in all component systems, also at a global level,
to the system obtained by interconnecting the individual systems.
The main area of application is to modular verification of
complex systems.
We illustrate the ideas by means of an example involving
a family of interacting controllers for trains on a rail track.
Export
BibTeX
@techreport{Sofronie-Stokkermans-atr46-2008,
TITLE = {Sheaves and Geometric Logic and Applications to Modular Verification of Complex Systems},
AUTHOR = {Sofronie-Stokkermans, Viorica},
LANGUAGE = {eng},
ISSN = {1860-9821},
NUMBER = {ATR46},
INSTITUTION = {SFB/TR 14 AVACS},
YEAR = {2008},
DATE = {2008},
ABSTRACT = {In this paper we show that states, transitions and behavior of concurrent systems can often be modeled as sheaves over a suitable topological space (where the topology expresses how the interacting systems share the information). This allows us to use results from categorical logic (and in particular geometric logic) to describe which type of properties are transferred, if valid locally in all component systems, also at a global level, to the system obtained by interconnecting the individual systems. The main area of application is to modular verification of complex systems. We illustrate the ideas by means of an example involving a family of interacting controllers for trains on a rail track.},
TYPE = {AVACS Technical Report},
VOLUME = {46},
}
Endnote
%0 Report
%A Sofronie-Stokkermans, Viorica
%+ Automation of Logic, MPI for Informatics, Max Planck Society
%T Sheaves and Geometric Logic and Applications to Modular Verification of Complex Systems :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0027-A579-5
%Y SFB/TR 14 AVACS
%D 2008
%X In this paper we show that states, transitions and behavior of
concurrent systems can often be modeled as sheaves over a
suitable topological space (where the topology expresses how the
interacting systems share the information). This allows us to use
results from categorical logic (and in particular geometric
logic) to describe which type of properties are transferred, if
valid locally in all component systems, also at a global level,
to the system obtained by interconnecting the individual systems.
The main area of application is to modular verification of
complex systems.
We illustrate the ideas by means of an example involving
a family of interacting controllers for trains on a rail track.
%B AVACS Technical Report
%N 46
%@ false
%U http://www.avacs.org/fileadmin/Publikationen/Open/avacs_technical_report_046.pdf
SOFIE: A Self-Organizing Framework for Information Extraction
F. Suchanek, M. Sozio and G. Weikum
Technical Report, 2008
F. Suchanek, M. Sozio and G. Weikum
Technical Report, 2008
Abstract
This paper presents SOFIE, a system for automated ontology extension.
SOFIE can parse natural language documents, extract ontological facts
from them and link the facts into an ontology. SOFIE uses logical
reasoning on the existing knowledge and on the new knowledge in order
to disambiguate words to their most probable meaning, to reason on the
meaning of text patterns and to take into account world knowledge
axioms. This allows SOFIE to check the plausibility of hypotheses and
to avoid inconsistencies with the ontology. The framework of SOFIE
unites the paradigms of pattern matching, word sense disambiguation
and ontological reasoning in one unified model. Our experiments show
that SOFIE delivers near-perfect output, even from unstructured
Internet documents.
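SOFIE itself casts fact extraction as a weighted reasoning (MaxSat-style) problem; purely to give a feel for how pattern confidence and consistency axioms interact, here is a toy greedy sketch in Python (all patterns, weights, and axioms are invented for illustration and are not SOFIE's actual model):

# Toy sketch: accept extracted hypotheses by confidence while keeping
# a set of world-knowledge axioms satisfied. Hypothetical data throughout.
from itertools import combinations

PATTERNS = {  # textual pattern -> (relation, confidence); invented numbers
    "was born in": ("bornIn", 0.9),
    "lives in": ("livesIn", 0.6),
}

AXIOMS = [
    # functionality: a person is born in at most one place
    lambda facts: all(
        not (r1 == r2 == "bornIn" and s1 == s2 and o1 != o2)
        for (r1, s1, o1), (r2, s2, o2) in combinations(facts, 2)
    ),
]

def extract(sentences):
    hypotheses = []
    for subj, pattern, obj in sentences:
        if pattern in PATTERNS:
            relation, confidence = PATTERNS[pattern]
            hypotheses.append(((relation, subj, obj), confidence))
    accepted = []
    # greedily accept hypotheses by confidence, keeping all axioms satisfied
    for fact, _ in sorted(hypotheses, key=lambda h: -h[1]):
        if all(axiom(accepted + [fact]) for axiom in AXIOMS):
            accepted.append(fact)
    return accepted

print(extract([
    ("Einstein", "was born in", "Ulm"),
    ("Einstein", "was born in", "Berlin"),
    ("Einstein", "lives in", "Princeton"),
]))

Running this accepts the first birthplace and the residence but rejects the second birthplace hypothesis, which would violate the functionality axiom; SOFIE resolves such conflicts jointly rather than greedily.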
Export
BibTeX
@techreport{SuchanekMauroWeikum2008,
TITLE = {{SOFIE}: A Self-Organizing Framework for Information Extraction},
AUTHOR = {Suchanek, Fabian and Sozio, Mauro and Weikum, Gerhard},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-5-004},
NUMBER = {MPI-I-2008-5-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2008},
DATE = {2008},
ABSTRACT = {This paper presents SOFIE, a system for automated ontology extension. SOFIE can parse natural language documents, extract ontological facts from them and link the facts into an ontology. SOFIE uses logical reasoning on the existing knowledge and on the new knowledge in order to disambiguate words to their most probable meaning, to reason on the meaning of text patterns and to take into account world knowledge axioms. This allows SOFIE to check the plausibility of hypotheses and to avoid inconsistencies with the ontology. The framework of SOFIE unites the paradigms of pattern matching, word sense disambiguation and ontological reasoning in one unified model. Our experiments show that SOFIE delivers near-perfect output, even from unstructured Internet documents.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Suchanek, Fabian
%A Sozio, Mauro
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T SOFIE: A Self-Organizing Framework for Information Extraction :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-668E-6
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-5-004
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2008
%P 49 p.
%X This paper presents SOFIE, a system for automated ontology extension.
SOFIE can parse natural language documents, extract ontological facts
from them and link the facts into an ontology. SOFIE uses logical
reasoning on the existing knowledge and on the new knowledge in order
to disambiguate words to their most probable meaning, to reason on the
meaning of text patterns and to take into account world knowledge
axioms. This allows SOFIE to check the plausibility of hypotheses and
to avoid inconsistencies with the ontology. The framework of SOFIE
unites the paradigms of pattern matching, word sense disambiguation
and ontological reasoning in one unified model. Our experiments show
that SOFIE delivers near-perfect output, even from unstructured
Internet documents.
%B Research Report / Max-Planck-Institut für Informatik
Shape Complexity from Image Similarity
D. Wang, A. Belyaev, W. Saleem and H.-P. Seidel
Technical Report, 2008
D. Wang, A. Belyaev, W. Saleem and H.-P. Seidel
Technical Report, 2008
Abstract
We present an approach to automatically compute the complexity of a
given 3D shape. Previous approaches have made use of geometric
and/or topological properties of the 3D shape to compute
complexity. Our approach is based on shape appearance and estimates
the complexity of a given 3D shape according to how 2D views of the
shape diverge from each other. We use similarity among views of the
3D shape as the basis for our complexity computation. Hence our
approach uses claims from psychology that humans mentally represent
3D shapes as organizations of 2D views and, therefore, mimics how
humans gauge shape complexity. Experimental results show that our
approach produces results that are more in agreement with the human
notion of shape complexity than those obtained using previous
approaches.
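The view-divergence idea is easy to prototype. The following minimal Python sketch scores a shape by the average dissimilarity over all pairs of rendered views; normalized cross-correlation as the similarity measure is our arbitrary stand-in, not necessarily the report's choice:

# Complexity as average pairwise divergence of 2D views (toy sketch).
import numpy as np
from itertools import combinations

def view_dissimilarity(a, b):
    # 1 - normalized cross-correlation of two grayscale views
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return 1.0 - float((a * b).mean())

def shape_complexity(views):
    # average divergence over all unordered view pairs; higher = more complex
    pairs = list(combinations(views, 2))
    return sum(view_dissimilarity(a, b) for a, b in pairs) / len(pairs)

rng = np.random.default_rng(0)
base = np.linspace(0.0, 1.0, 64)[None, :] * np.ones((64, 1))
simple_views = [base + 0.01 * rng.standard_normal((64, 64)) for _ in range(8)]
complex_views = [rng.standard_normal((64, 64)) for _ in range(8)]
print(shape_complexity(simple_views) < shape_complexity(complex_views))  # True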
Export
BibTeX
@techreport{WangBelyaevSaleemSeidel2008,
TITLE = {Shape Complexity from Image Similarity},
AUTHOR = {Wang, Danyi and Belyaev, Alexander and Saleem, Waqar and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-4-002},
NUMBER = {MPI-I-2008-4-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2008},
DATE = {2008},
ABSTRACT = {We present an approach to automatically compute the complexity of a given 3D shape. Previous approaches have made use of geometric and/or topological properties of the 3D shape to compute complexity. Our approach is based on shape appearance and estimates the complexity of a given 3D shape according to how 2D views of the shape diverge from each other. We use similarity among views of the 3D shape as the basis for our complexity computation. Hence our approach uses claims from psychology that humans mentally represent 3D shapes as organizations of 2D views and, therefore, mimics how humans gauge shape complexity. Experimental results show that our approach produces results that are more in agreement with the human notion of shape complexity than those obtained using previous approaches.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Wang, Danyi
%A Belyaev, Alexander
%A Saleem, Waqar
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Shape Complexity from Image Similarity :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-66B9-6
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-4-002
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2008
%P 28 p.
%X We present an approach to automatically compute the complexity of a
given 3D shape. Previous approaches have made use of geometric
and/or topological properties of the 3D shape to compute
complexity. Our approach is based on shape appearance and estimates
the complexity of a given 3D shape according to how 2D views of the
shape diverge from each other. We use similarity among views of the
3D shape as the basis for our complexity computation. Hence our
approach uses claims from psychology that humans mentally represent
3D shapes as organizations of 2D views and, therefore, mimics how
humans gauge shape complexity. Experimental results show that our
approach produces results that are more in agreement with the human
notion of shape complexity than those obtained using previous
approaches.
%B Research Report / Max-Planck-Institut für Informatik
2007
A Lagrangian relaxation approach for the multiple sequence alignment problem
E. Althaus and S. Canzar
Technical Report, 2007
E. Althaus and S. Canzar
Technical Report, 2007
Abstract
We present a branch-and-bound (bb) algorithm for the multiple sequence
alignment problem (MSA), one of the most important problems in computational
biology. The upper bound at each bb node is based on a Lagrangian relaxation
of an integer linear programming formulation for MSA. Dualizing certain
inequalities, the Lagrangian subproblem becomes a pairwise alignment problem,
which can be solved efficiently by a dynamic programming approach. Due to a
reformulation w.r.t. additionally introduced variables prior to relaxation we
improve the convergence rate dramatically while at the same time being able
to solve the Lagrangian problem efficiently. Our experiments show that our
implementation, although preliminary, outperforms all exact algorithms for
the multiple sequence alignment problem.
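The general mechanism, dual bounds obtained by a projected subgradient method around an easy subproblem, can be shown on a deliberately tiny ILP. The Python sketch below is a generic illustration; the paper's subproblem is a pairwise alignment solved by dynamic programming, not the coordinate-wise maximization used here:

# Toy Lagrangian relaxation: maximize c.x s.t. A x <= b, x binary,
# dualizing the coupling constraints A x <= b. Invented numbers.
import numpy as np

c = np.array([4.0, 3.0, 2.0])
A = np.array([[2.0, 2.0, 1.0]])
b = np.array([3.0])

def lagrangian_subproblem(lam):
    # max (c - lam.A).x over x in {0,1}^n decomposes coordinate-wise
    reduced = c - lam @ A
    x = (reduced > 0).astype(float)
    return x, float(reduced[reduced > 0].sum() + lam @ b)

lam = np.zeros(1)
best_bound = np.inf
for k in range(1, 101):
    x, bound = lagrangian_subproblem(lam)
    best_bound = min(best_bound, bound)           # every dual value is an upper bound
    subgradient = b - A @ x
    lam = np.maximum(0.0, lam - subgradient / k)  # projected subgradient step
print(best_bound)                                 # approaches the LP bound (6.0)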
Export
BibTeX
@techreport{AlthausCanzar2007,
TITLE = {A Lagrangian relaxation approach for the multiple sequence alignment problem},
AUTHOR = {Althaus, Ernst and Canzar, Stefan},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-1-002},
NUMBER = {MPI-I-2007-1-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2007},
DATE = {2007},
ABSTRACT = {We present a branch-and-bound (bb) algorithm for the multiple sequence alignment problem (MSA), one of the most important problems in computational biology. The upper bound at each bb node is based on a Lagrangian relaxation of an integer linear programming formulation for MSA. Dualizing certain inequalities, the Lagrangian subproblem becomes a pairwise alignment problem, which can be solved efficiently by a dynamic programming approach. Due to a reformulation w.r.t. additionally introduced variables prior to relaxation we improve the convergence rate dramatically while at the same time being able to solve the Lagrangian problem efficiently. Our experiments show that our implementation, although preliminary, outperforms all exact algorithms for the multiple sequence alignment problem.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Althaus, Ernst
%A Canzar, Stefan
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A Lagrangian relaxation approach for the multiple sequence alignment problem :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6707-0
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-1-002
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2007
%P 41 p.
%X We present a branch-and-bound (bb) algorithm for the multiple sequence
alignment problem (MSA), one of the most important problems in computational
biology. The upper bound at each bb node is based on a Lagrangian relaxation
of an integer linear programming formulation for MSA. Dualizing certain
inequalities, the Lagrangian subproblem becomes a pairwise alignment problem,
which can be solved efficiently by a dynamic programming approach. Due to a
reformulation w.r.t. additionally introduced variables prior to relaxation we
improve the convergence rate dramatically while at the same time being able
to solve the Lagrangian problem efficiently. Our experiments show that our
implementation, although preliminary, outperforms all exact algorithms for
the multiple sequence alignment problem.
%B Research Report / Max-Planck-Institut für Informatik
A nonlinear viseme model for triphone-based speech synthesis
R. Bargmann, V. Blanz and H.-P. Seidel
Technical Report, 2007
R. Bargmann, V. Blanz and H.-P. Seidel
Technical Report, 2007
Abstract
This paper presents a representation of visemes that defines a measure
of similarity between different visemes, and a system of viseme
categories. The representation is derived from a statistical data
analysis of feature points on 3D scans, using Locally Linear
Embedding (LLE). The similarity measure determines which available
viseme and triphones to use to synthesize 3D face animation for a
novel audio file. From a corpus of dynamic recorded 3D mouth
articulation data, our system is able to find the best suited sequence
of triphones over which to interpolate while reusing the
coarticulation information to obtain correct mouth movements over
time. Due to the similarity measure, the system can deal with
relatively small triphone databases and find the most appropriate
candidates. With the selected sequence of database triphones, we can
finally morph along the successive triphones to produce the final
articulation animation.
In an entirely data-driven approach, our automated procedure for
defining viseme categories reproduces the groups of related visemes
that are defined in the phonetics literature.
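For readers who want to experiment with the embedding step, scikit-learn exposes Locally Linear Embedding directly; the sketch below (with random stand-in data in place of real 3D scan features) shows the shape of the computation:

# LLE of flattened feature-point configurations (stand-in data).
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(1)
scans = rng.standard_normal((200, 30 * 3))   # 200 "scans", 30 points in 3D

lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2)
coords = lle.fit_transform(scans)            # low-dimensional coordinates;
print(coords.shape)                          # (200, 2); distances here act as
                                             # a viseme similarity measure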
Export
BibTeX
@techreport{BargmannBlanzSeidel2007,
TITLE = {A nonlinear viseme model for triphone-based speech synthesis},
AUTHOR = {Bargmann, Robert and Blanz, Volker and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-003},
NUMBER = {MPI-I-2007-4-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2007},
DATE = {2007},
ABSTRACT = {This paper presents a representation of visemes that defines a measure of similarity between different visemes, and a system of viseme categories. The representation is derived from a statistical data analysis of feature points on 3D scans, using Locally Linear Embedding (LLE). The similarity measure determines which available viseme and triphones to use to synthesize 3D face animation for a novel audio file. From a corpus of dynamic recorded 3D mouth articulation data, our system is able to find the best suited sequence of triphones over which to interpolate while reusing the coarticulation information to obtain correct mouth movements over time. Due to the similarity measure, the system can deal with relatively small triphone databases and find the most appropriate candidates. With the selected sequence of database triphones, we can finally morph along the successive triphones to produce the final articulation animation. In an entirely data-driven approach, our automated procedure for defining viseme categories reproduces the groups of related visemes that are defined in the phonetics literature.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Bargmann, Robert
%A Blanz, Volker
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T A nonlinear viseme model for triphone-based speech synthesis :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-66DC-7
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-003
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2007
%P 28 p.
%X This paper presents a representation of visemes that defines a measure
of similarity between different visemes, and a system of viseme
categories. The representation is derived from a statistical data
analysis of feature points on 3D scans, using Locally Linear
Embedding (LLE). The similarity measure determines which available
viseme and triphones to use to synthesize 3D face animation for a
novel audio file. From a corpus of dynamic recorded 3D mouth
articulation data, our system is able to find the best suited sequence
of triphones over which to interpolate while reusing the
coarticulation information to obtain correct mouth movements over
time. Due to the similarity measure, the system can deal with
relatively small triphone databases and find the most appropriate
candidates. With the selected sequence of database triphones, we can
finally morph along the successive triphones to produce the final
articulation animation.
In an entirely data-driven approach, our automated procedure for
defining viseme categories reproduces the groups of related visemes
that are defined in the phonetics literature.
%B Research Report / Max-Planck-Institut für Informatik
Computing Envelopes of Quadrics
E. Berberich and M. Meyerovitch
Technical Report, 2007
E. Berberich and M. Meyerovitch
Technical Report, 2007
Export
BibTeX
@techreport{acs:bm-ceq-07,
TITLE = {Computing Envelopes of Quadrics},
AUTHOR = {Berberich, Eric and Meyerovitch, Michal},
LANGUAGE = {eng},
NUMBER = {ACS-TR-241402-03},
LOCALID = {Local-ID: C12573CC004A8E26-12A6DC64E5449DC9C12573D1004DA0BC-acs:bm-ceq-07},
INSTITUTION = {University of Groningen},
ADDRESS = {Groningen, The Netherlands},
YEAR = {2007},
DATE = {2007},
TYPE = {ACS Technical Reports},
}
Endnote
%0 Report
%A Berberich, Eric
%A Meyerovitch, Michal
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Computing Envelopes of Quadrics :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1EA4-F
%F EDOC: 356718
%F OTHER: Local-ID: C12573CC004A8E26-12A6DC64E5449DC9C12573D1004DA0BC-acs:bm-ceq-07
%Y University of Groningen
%C Groningen, The Netherlands
%D 2007
%P 5 p.
%B ACS Technical Reports
Linear-Time Reordering in a Sweep-line Algorithm for Algebraic Curves Intersecting in a Common Point
E. Berberich and L. Kettner
Technical Report, 2007
E. Berberich and L. Kettner
Technical Report, 2007
Export
BibTeX
@techreport{bk-reorder-07,
TITLE = {Linear-Time Reordering in a Sweep-line Algorithm for Algebraic Curves Intersecting in a Common Point},
AUTHOR = {Berberich, Eric and Kettner, Lutz},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2007-1-001},
LOCALID = {Local-ID: C12573CC004A8E26-D3347FB7A037EE5CC12573D1004C6833-bk-reorder-07},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2007},
DATE = {2007},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Berberich, Eric
%A Kettner, Lutz
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Linear-Time Reordering in a Sweep-line Algorithm for Algebraic Curves Intersecting in a Common Point :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1FB9-8
%F EDOC: 356668
%@ 0946-011X
%F OTHER: Local-ID: C12573CC004A8E26-D3347FB7A037EE5CC12573D1004C6833-bk-reorder-07
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2007
%P 20 p.
%B Research Report
Revision of interface specification of algebraic kernel
E. Berberich, M. Hemmer, M. I. Karavelas and M. Teillaud
Technical Report, 2007
E. Berberich, M. Hemmer, M. I. Karavelas and M. Teillaud
Technical Report, 2007
Export
BibTeX
@techreport{acs:bhkt-risak-06,
TITLE = {Revision of interface specification of algebraic kernel},
AUTHOR = {Berberich, Eric and Hemmer, Michael and Karavelas, Menelaos I. and Teillaud, Monique},
LANGUAGE = {eng},
LOCALID = {Local-ID: C12573CC004A8E26-1F31C7FA352D83DDC12573D1004F257E-acs:bhkt-risak-06},
INSTITUTION = {University of Groningen},
ADDRESS = {Groningen, The Netherlands},
YEAR = {2007},
DATE = {2007},
TYPE = {ACS Technical Reports},
}
Endnote
%0 Report
%A Berberich, Eric
%A Hemmer, Michael
%A Karavelas, Menelaos I.
%A Teillaud, Monique
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
%T Revision of interface specification of algebraic kernel :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-208F-0
%F EDOC: 356661
%F OTHER: Local-ID: C12573CC004A8E26-1F31C7FA352D83DDC12573D1004F257E-acs:bhkt-risak-06
%F OTHER: ACS-TR-243301-01
%Y University of Groningen
%C Groningen, The Netherlands
%D 2007
%P 100 p.
%B ACS Technical Reports
Sweeping and maintaining two-dimensional arrangements on quadrics
E. Berberich, E. Fogel, D. Halperin, K. Mehlhorn and R. Wein
Technical Report, 2007
E. Berberich, E. Fogel, D. Halperin, K. Mehlhorn and R. Wein
Technical Report, 2007
Export
BibTeX
@techreport{acs:bfhmw-smtaoq-07,
TITLE = {Sweeping and maintaining two-dimensional arrangements on quadrics},
AUTHOR = {Berberich, Eric and Fogel, Efi and Halperin, Dan and Mehlhorn, Kurt and Wein, Ron},
LANGUAGE = {eng},
NUMBER = {ACS-TR-241402-02},
LOCALID = {Local-ID: C12573CC004A8E26-A2D9FC191F294C4BC12573D1004D4FA3-acs:bfhmw-smtaoq-07},
INSTITUTION = {University of Groningen},
ADDRESS = {Groningen, The Netherlands},
YEAR = {2007},
DATE = {2007},
TYPE = {ACS Technical Reports},
}
Endnote
%0 Report
%A Berberich, Eric
%A Fogel, Efi
%A Halperin, Dan
%A Mehlhorn, Kurt
%A Wein, Ron
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Sweeping and maintaining two-dimensional arrangements on quadrics :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-20E3-1
%F EDOC: 356692
%F OTHER: Local-ID: C12573CC004A8E26-A2D9FC191F294C4BC12573D1004D4FA3-acs:bfhmw-smtaoq-07
%Y University of Groningen
%C Groningen, The Netherlands
%D 2007
%P 10 p.
%B ACS Technical Reports
Definition of the 3D Quadrical Kernel Content
E. Berberich and M. Hemmer
Technical Report, 2007
E. Berberich and M. Hemmer
Technical Report, 2007
Export
BibTeX
@techreport{acs:bh-dtqkc-07,
TITLE = {Definition of the {3D} Quadrical Kernel Content},
AUTHOR = {Berberich, Eric and Hemmer, Michael},
LANGUAGE = {eng},
NUMBER = {ACS-TR-243302-02},
LOCALID = {Local-ID: C12573CC004A8E26-2FF567066FB82A5FC12573D1004DDD73-acs:bh-dtqkc-07},
INSTITUTION = {University of Groningen},
ADDRESS = {Groningen, The Netherlands},
YEAR = {2007},
DATE = {2007},
TYPE = {ACS Technical Reports},
}
Endnote
%0 Report
%A Berberich, Eric
%A Hemmer, Michael
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Definition of the 3D Quadrical Kernel Content :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1ED4-1
%F EDOC: 356735
%F OTHER: Local-ID: C12573CC004A8E26-2FF567066FB82A5FC12573D1004DDD73-acs:bh-dtqkc-07
%Y University of Groningen
%C Groningen, The Netherlands
%D 2007
%P 25 p.
%B ACS Technical Reports
Exact Computation of Arrangements of Rotated Conics
E. Berberich, M. Caroli and N. Wolpert
Technical Report, 2007
E. Berberich, M. Caroli and N. Wolpert
Technical Report, 2007
Export
BibTeX
@techreport{acs:bcw-carc-07,
TITLE = {Exact Computation of Arrangements of Rotated Conics},
AUTHOR = {Berberich, Eric and Caroli, Manuel and Wolpert, Nicola},
LANGUAGE = {eng},
NUMBER = {ACS-TR-123104-03},
LOCALID = {Local-ID: C12573CC004A8E26-1EB177EFAA801139C12573D1004D0246-acs:bcw-carc-07},
INSTITUTION = {University of Groningen},
ADDRESS = {Groningen, The Netherlands},
YEAR = {2007},
DATE = {2007},
TYPE = {ACS Technical Reports},
}
Endnote
%0 Report
%A Berberich, Eric
%A Caroli, Manuel
%A Wolpert, Nicola
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Exact Computation of Arrangements of Rotated Conics :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1F20-F
%F EDOC: 356666
%F OTHER: Local-ID: C12573CC004A8E26-1EB177EFAA801139C12573D1004D0246-acs:bcw-carc-07
%Y University of Groningen
%C Groningen, The Netherlands
%D 2007
%P 5 p.
%B ACS Technical Reports
Updated Website to include Benchmark Instances for Arrangements of Quadrics and Planar Algebraic Curves
E. Berberich, E. Fogel and A. Meyer
Technical Report, 2007
E. Berberich, E. Fogel and A. Meyer
Technical Report, 2007
Export
BibTeX
@techreport{acs:bfm-uwibaqpac-07,
TITLE = {Updated Website to include Benchmark Instances for Arrangements of Quadrics and Planar Algebraic Curves},
AUTHOR = {Berberich, Eric and Fogel, Efi and Meyer, Andreas},
LANGUAGE = {eng},
NUMBER = {ACS-TR-243305-01},
LOCALID = {Local-ID: C12573CC004A8E26-DEDF6F20E463424CC12573D1004E1823-acs:bfm-uwibaqpac-07},
INSTITUTION = {University of Groningen},
ADDRESS = {Groningen, The Netherlands},
YEAR = {2007},
DATE = {2007},
TYPE = {ACS Technical Reports},
}
Endnote
%0 Report
%A Berberich, Eric
%A Fogel, Efi
%A Meyer, Andreas
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Updated Website to include Benchmark Instances for Arrangements of Quadrics and Planar Algebraic Curves :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-2128-E
%F EDOC: 356664
%F OTHER: Local-ID: C12573CC004A8E26-DEDF6F20E463424CC12573D1004E1823-acs:bfm-uwibaqpac-07
%Y University of Groningen
%C Groningen, The Netherlands
%D 2007
%P 5 p.
%B ACS Technical Reports
A Time Machine for Text Search
K. Berberich, S. Bedathur, T. Neumann and G. Weikum
Technical Report, 2007
K. Berberich, S. Bedathur, T. Neumann and G. Weikum
Technical Report, 2007
Export
BibTeX
@techreport{TechReportBBNW-2007,
TITLE = {A Time Machine for Text Search},
AUTHOR = {Berberich, Klaus and Bedathur, Srikanta and Neumann, Thomas and Weikum, Gerhard},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPII-I-2007-5-02},
LOCALID = {Local-ID: C12573CC004A8E26-D444201EBAA5F95BC125731E00458A41-TechReportBBNW-2007},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2007},
DATE = {2007},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Berberich, Klaus
%A Bedathur, Srikanta
%A Neumann, Thomas
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T A Time Machine for Text Search :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1E49-E
%F EDOC: 356443
%@ 0946-011X
%F OTHER: Local-ID: C12573CC004A8E26-D444201EBAA5F95BC125731E00458A41-TechReportBBNW-2007
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2007
%P 39 p.
%B Research Report
HistoPyramids in Iso-Surface Extraction
C. Dyken, G. Ziegler, C. Theobalt and H.-P. Seidel
Technical Report, 2007
C. Dyken, G. Ziegler, C. Theobalt and H.-P. Seidel
Technical Report, 2007
Abstract
We present an implementation approach to high-speed Marching
Cubes, running entirely on the Graphics Processing Unit of Shader Model
3.0 and 4.0 graphics hardware. Our approach is based on the interpretation
of Marching Cubes as a stream compaction and expansion process, and is
implemented using the HistoPyramid, a hierarchical data structure
previously only used in GPU data compaction. We extend the HistoPyramid
structure to allow for stream expansion, which provides an efficient
method for generating geometry directly on the GPU, even on Shader Model
3.0 hardware. Currently, our algorithm outperforms all other known
GPU-based iso-surface extraction algorithms. We describe our
implementation and present a performance analysis on several generations
of graphics hardware.
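The HistoPyramid itself is easy to emulate on the CPU. The following Python sketch (a 1D toy, not the GPU shader code) builds the pyramid of partial counts and then locates the k-th surviving element by top-down traversal, which is exactly the stream-compaction primitive the abstract refers to:

# CPU emulation of a HistoPyramid for stream compaction.
import numpy as np

def build_histopyramid(mask):
    # level 0 holds per-element survival flags; each higher level stores
    # pairwise sums, so the apex counts all surviving elements
    levels = [np.asarray(mask, dtype=np.int64)]
    while len(levels[-1]) > 1:
        lv = levels[-1]
        if len(lv) % 2:
            lv = np.append(lv, 0)            # pad to even length
            levels[-1] = lv
        levels.append(lv[0::2] + lv[1::2])
    return levels

def extract(levels, k):
    # top-down traversal: at each level choose the child whose partial
    # count still contains the k-th surviving element
    idx = 0
    for lv in reversed(levels[:-1]):
        idx *= 2
        left = lv[idx]
        if k >= left:
            k -= left
            idx += 1
    return idx

mask = np.array([0, 1, 0, 0, 1, 1, 0, 1])
levels = build_histopyramid(mask)
print([extract(levels, k) for k in range(int(levels[-1][0]))])  # -> [1, 4, 5, 7]

On the GPU the same traversal runs in parallel, once per output element, which is what allows geometry to be generated without CPU round-trips.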
Export
BibTeX
@techreport{DykenZieglerTheobaltSeidel2007,
TITLE = {Histo{P}yramids in Iso-Surface Extraction},
AUTHOR = {Dyken, Christopher and Ziegler, Gernot and Theobalt, Christian and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-006},
NUMBER = {MPI-I-2007-4-006},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2007},
DATE = {2007},
ABSTRACT = {We present an implementation approach to high-speed Marching Cubes, running entirely on the Graphics Processing Unit of Shader Model 3.0 and 4.0 graphics hardware. Our approach is based on the interpretation of Marching Cubes as a stream compaction and expansion process, and is implemented using the HistoPyramid, a hierarchical data structure previously only used in GPU data compaction. We extend the HistoPyramid structure to allow for stream expansion, which provides an efficient method for generating geometry directly on the GPU, even on Shader Model 3.0 hardware. Currently, our algorithm outperforms all other known GPU-based iso-surface extraction algorithms. We describe our implementation and present a performance analysis on several generations of graphics hardware.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Dyken, Christopher
%A Ziegler, Gernot
%A Theobalt, Christian
%A Seidel, Hans-Peter
%+ External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T HistoPyramids in Iso-Surface Extraction :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-66D3-A
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-006
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2007
%P 16 p.
%X We present an implementation approach to high-speed Marching
Cubes, running entirely on the Graphics Processing Unit of Shader Model
3.0 and 4.0 graphics hardware. Our approach is based on the interpretation
of Marching Cubes as a stream compaction and expansion process, and is
implemented using the HistoPyramid, a hierarchical data structure
previously only used in GPU data compaction. We extend the HistoPyramid
structure to allow for stream expansion, which provides an efficient
method for generating geometry directly on the GPU, even on Shader Model
3.0 hardware. Currently, our algorithm outperforms all other known
GPU-based iso-surface extraction algorithms. We describe our
implementation and present a performance analysis on several generations
of graphics hardware.
%B Research Report / Max-Planck-Institut für Informatik
Snap Rounding of Bézier Curves
A. Eigenwillig, L. Kettner and N. Wolpert
Technical Report, 2007
A. Eigenwillig, L. Kettner and N. Wolpert
Technical Report, 2007
Export
BibTeX
@techreport{ACS-TR-121108-01,
TITLE = {Snap Rounding of B{\'e}zier Curves},
AUTHOR = {Eigenwillig, Arno and Kettner, Lutz and Wolpert, Nicola},
LANGUAGE = {eng},
NUMBER = {MPI-I-2006-1-005},
LOCALID = {Local-ID: C12573CC004A8E26-13E19171EEC8D5E0C12572A0005C02F6-ACS-TR-121108-01},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken, Germany},
YEAR = {2007},
DATE = {2007},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Eigenwillig, Arno
%A Kettner, Lutz
%A Wolpert, Nicola
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Snap Rounding of Bézier Curves :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-20B9-0
%F EDOC: 356760
%F OTHER: Local-ID: C12573CC004A8E26-13E19171EEC8D5E0C12572A0005C02F6-ACS-TR-121108-01
%F OTHER: ACS-TR-121108-01
%Y Max-Planck-Institut für Informatik
%C Saarbrücken, Germany
%D 2007
%P 19 p.
%B Research Report
Global stochastic optimization for robust and accurate human motion capture
J. Gall, T. Brox, B. Rosenhahn and H.-P. Seidel
Technical Report, 2007
J. Gall, T. Brox, B. Rosenhahn and H.-P. Seidel
Technical Report, 2007
Abstract
Tracking of human motion in video is usually tackled either
by local optimization or filtering approaches. While
local optimization offers accurate estimates but often loses
track due to local optima, particle filtering can recover from
errors at the expense of a poor accuracy due to overestimation
of noise. In this paper, we propose to embed global
stochastic optimization in a tracking framework. This new
optimization technique exhibits both the robustness of filtering
strategies and a remarkable accuracy. We apply the
optimization to an energy function that relies on silhouettes
and color, as well as some prior information on physical
constraints. This framework provides a general solution to
markerless human motion capture since neither excessive
preprocessing nor strong assumptions except for a 3D model
are required. The optimization provides initialization and
accurate tracking even in case of low contrast and challenging
illumination. Our experimental evaluation demonstrates
the large improvements obtained with this technique.
It comprises a quantitative error analysis comparing the
approach with local optimization, particle filtering, and a
heuristic based on particle filtering.
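As a generic illustration of particle-based optimization with annealing (a stand-in, not the exact algorithm of the report), the following Python sketch sharpens the selection weights layer by layer while shrinking the diffusion noise, so the particle set first explores and then concentrates at the global minimum:

# Annealed particle search on a toy 1D energy (invented test function).
import numpy as np

def energy(x):
    # multimodal energy with its global minimum at x = 2
    return (x - 2.0) ** 2 * (1.0 + np.sin(5.0 * x) ** 2)

rng = np.random.default_rng(0)
particles = rng.uniform(-5.0, 5.0, size=500)
for layer in range(20):
    beta = 0.2 * (layer + 1)                 # annealing: sharpen the weights
    weights = np.exp(-beta * energy(particles))
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    sigma = 1.0 / (layer + 1)                # shrink diffusion while cooling
    particles = particles[idx] + sigma * rng.standard_normal(len(particles))
print(particles.mean())                      # concentrates near x = 2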
Export
BibTeX
@techreport{GallBroxRosenhahnSeidel2008,
TITLE = {Global stochastic optimization for robust and accurate human motion capture},
AUTHOR = {Gall, J{\"u}rgen and Brox, Thomas and Rosenhahn, Bodo and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-008},
NUMBER = {MPI-I-2007-4-008},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2007},
DATE = {2007},
ABSTRACT = {Tracking of human motion in video is usually tackled either by local optimization or filtering approaches. While local optimization offers accurate estimates but often loses track due to local optima, particle filtering can recover from errors at the expense of a poor accuracy due to overestimation of noise. In this paper, we propose to embed global stochastic optimization in a tracking framework. This new optimization technique exhibits both the robustness of filtering strategies and a remarkable accuracy. We apply the optimization to an energy function that relies on silhouettes and color, as well as some prior information on physical constraints. This framework provides a general solution to markerless human motion capture since neither excessive preprocessing nor strong assumptions except for a 3D model are required. The optimization provides initialization and accurate tracking even in case of low contrast and challenging illumination. Our experimental evaluation demonstrates the large improvements obtained with this technique. It comprises a quantitative error analysis comparing the approach with local optimization, particle filtering, and a heuristic based on particle filtering.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Gall, Jürgen
%A Brox, Thomas
%A Rosenhahn, Bodo
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Global stochastic optimization for robust and accurate human motion capture :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-66CE-7
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-008
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2007
%P 28 p.
%X Tracking of human motion in video is usually tackled either
by local optimization or filtering approaches. While
local optimization offers accurate estimates but often loses
track due to local optima, particle filtering can recover from
errors at the expense of a poor accuracy due to overestimation
of noise. In this paper, we propose to embed global
stochastic optimization in a tracking framework. This new
optimization technique exhibits both the robustness of filtering
strategies and a remarkable accuracy. We apply the
optimization to an energy function that relies on silhouettes
and color, as well as some prior information on physical
constraints. This framework provides a general solution to
markerless human motion capture since neither excessive
preprocessing nor strong assumptions except for a 3D model
are required. The optimization provides initialization and
accurate tracking even in case of low contrast and challenging
illumination. Our experimental evaluation demonstrates
the large improvements obtained with this technique.
It comprises a quantitative error analysis comparing the
approach with local optimization, particle filtering, and a
heuristic based on particle filtering.
%B Research Report / Max-Planck-Institut für Informatik
Clustered stochastic optimization for object recognition and pose estimation
J. Gall, B. Rosenhahn and H.-P. Seidel
Technical Report, 2007
J. Gall, B. Rosenhahn and H.-P. Seidel
Technical Report, 2007
Abstract
We present an approach for estimating the 3D position and in case of
articulated objects also the joint configuration from segmented 2D
images. The pose estimation without initial information is a challenging
optimization problem in a high dimensional space and is essential for
texture acquisition and initialization of model-based tracking
algorithms. Our method is able to recognize the correct object in the
case of multiple objects and estimates its pose with a high accuracy.
The key component is a particle-based global optimization method that
converges to the global minimum similar to simulated annealing. After
detecting potential bounded subsets of the search space, the particles
are divided into clusters and migrate to the most attractive cluster as
the time increases. The performance of our approach is verified by means
of real scenes and a quantitative error analysis for image distortions.
Our experiments include rigid bodies and full human bodies.
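The clustering-and-migration idea can likewise be illustrated on a toy 1D energy with two basins. In the hypothetical Python sketch below, particles are grouped by a small k-means routine and all mass migrates to the cluster containing the most attractive particle:

# Clustered particle search on a toy two-basin energy (invented data).
import numpy as np

def energy(x):
    # two basins; the one around x = 2 is the deeper
    return np.minimum((x + 3.0) ** 2 + 0.5, (x - 2.0) ** 2)

def kmeans_1d(x, k, rng, iters=15):
    centers = rng.choice(x, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = x[labels == c].mean()
    return labels

rng = np.random.default_rng(0)
particles = rng.uniform(-6.0, 6.0, size=400)
for step in range(10):
    labels = kmeans_1d(particles, 2, rng)
    occupied = [c for c in range(2) if np.any(labels == c)]
    best = min(occupied, key=lambda c: energy(particles[labels == c]).min())
    survivors = particles[labels == best]
    # migrate: resample all particles inside the most attractive cluster
    particles = rng.choice(survivors, size=400) + 0.1 * rng.standard_normal(400)
print(particles.mean())                      # settles in the deeper basin (x = 2)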
Export
BibTeX
@techreport{GallRosenhahnSeidel2007,
TITLE = {Clustered stochastic optimization for object recognition and pose estimation},
AUTHOR = {Gall, J{\"u}rgen and Rosenhahn, Bodo and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-001},
NUMBER = {MPI-I-2007-4-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2007},
DATE = {2007},
ABSTRACT = {We present an approach for estimating the 3D position and in case of articulated objects also the joint configuration from segmented 2D images. The pose estimation without initial information is a challenging optimization problem in a high dimensional space and is essential for texture acquisition and initialization of model-based tracking algorithms. Our method is able to recognize the correct object in the case of multiple objects and estimates its pose with a high accuracy. The key component is a particle-based global optimization method that converges to the global minimum similar to simulated annealing. After detecting potential bounded subsets of the search space, the particles are divided into clusters and migrate to the most attractive cluster as the time increases. The performance of our approach is verified by means of real scenes and a quantitative error analysis for image distortions. Our experiments include rigid bodies and full human bodies.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Gall, Jürgen
%A Rosenhahn, Bodo
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Clustered stochastic optimization for object recognition and pose estimation :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-66E5-2
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2007
%P 23 p.
%X We present an approach for estimating the 3D position and in case of
articulated objects also the joint configuration from segmented 2D
images. The pose estimation without initial information is a challenging
optimization problem in a high dimensional space and is essential for
texture acquisition and initialization of model-based tracking
algorithms. Our method is able to recognize the correct object in the
case of multiple objects and estimates its pose with a high accuracy.
The key component is a particle-based global optimization method that
converges to the global minimum similar to simulated annealing. After
detecting potential bounded subsets of the search space, the particles
are divided into clusters and migrate to the most attractive cluster as
the time increases. The performance of our approach is verified by means
of real scenes and a quantitative error analysis for image distortions.
Our experiments include rigid bodies and full human bodies.
%B Research Report / Max-Planck-Institut für Informatik
Interacting and Annealing Particle Filters: Mathematics and a Recipe for Applications
J. Gall, J. Potthoff, C. Schnörr, B. Rosenhahn and H.-P. Seidel
Technical Report, 2007
J. Gall, J. Potthoff, C. Schnörr, B. Rosenhahn and H.-P. Seidel
Technical Report, 2007
Abstract
Interacting and annealing are two powerful strategies that are applied
in different areas of stochastic modelling and data analysis.
Interacting particle systems approximate a distribution of interest by a
finite number of particles where the particles interact between the time
steps. In computer vision, they are commonly known as particle filters.
Simulated annealing, on the other hand, is a global optimization method
derived from statistical mechanics. A recent heuristic approach to fuse
these two techniques for motion capturing has become known as annealed
particle filter. In order to analyze these techniques, we rigorously
derive in this paper two algorithms with annealing properties based on
the mathematical theory of interacting particle systems. Convergence
results and sufficient parameter restrictions enable us to point out
limitations of the annealed particle filter. Moreover, we evaluate the
impact of the parameters on the performance in various experiments,
including the tracking of articulated bodies from noisy measurements.
Our results provide a general guidance on suitable parameter choices for
different applications.
Export
BibTeX
@techreport{GallPotthoffRosenhahnSchnoerrSeidel2006,
TITLE = {Interacting and Annealing Particle Filters: Mathematics and a Recipe for Applications},
AUTHOR = {Gall, J{\"u}rgen and Potthoff, J{\"u}rgen and Schn{\"o}rr, Christoph and Rosenhahn, Bodo and Seidel, Hans-Peter},
LANGUAGE = {eng},
NUMBER = {MPI-I-2006-4-009},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2007},
DATE = {2007},
ABSTRACT = {Interacting and annealing are two powerful strategies that are applied in different areas of stochastic modelling and data analysis. Interacting particle systems approximate a distribution of interest by a finite number of particles where the particles interact between the time steps. In computer vision, they are commonly known as particle filters. Simulated annealing, on the other hand, is a global optimization method derived from statistical mechanics. A recent heuristic approach to fuse these two techniques for motion capturing has become known as annealed particle filter. In order to analyze these techniques, we rigorously derive in this paper two algorithms with annealing properties based on the mathematical theory of interacting particle systems. Convergence results and sufficient parameter restrictions enable us to point out limitations of the annealed particle filter. Moreover, we evaluate the impact of the parameters on the performance in various experiments, including the tracking of articulated bodies from noisy measurements. Our results provide a general guidance on suitable parameter choices for different applications.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Gall, Jürgen
%A Potthoff, Jürgen
%A Schnörr, Christoph
%A Rosenhahn, Bodo
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Interacting and Annealing Particle Filters: Mathematics and a Recipe for Applications :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0027-13C7-D
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2007
%Z Review method: peer-reviewed
%X Interacting and annealing are two powerful strategies that are applied
in different areas of stochastic modelling and data analysis.
Interacting particle systems approximate a distribution of interest by a
finite number of particles where the particles interact between the time
steps. In computer vision, they are commonly known as particle filters.
Simulated annealing, on the other hand, is a global optimization method
derived from statistical mechanics. A recent heuristic approach to fuse
these two techniques for motion capturing has become known as annealed
particle filter. In order to analyze these techniques, we rigorously
derive in this paper two algorithms with annealing properties based on
the mathematical theory of interacting particle systems. Convergence
results and sufficient parameter restrictions enable us to point out
limitations of the annealed particle filter. Moreover, we evaluate the
impact of the parameters on the performance in various experiments,
including the tracking of articulated bodies from noisy measurements.
Our results provide a general guidance on suitable parameter choices for
different applications.
%B Research Report
LFthreads: a lock-free thread library
A. Gidenstam and M. Papatriantafilou
Technical Report, 2007
A. Gidenstam and M. Papatriantafilou
Technical Report, 2007
Abstract
This paper presents the synchronization in LFthreads, a thread library
entirely based on lock-free methods, i.e. no
spin-locks or similar synchronization mechanisms are employed in the
implementation of the multithreading.
Since lock-freedom is highly desirable in multiprocessors/multicores
due to its advantages in parallelism, fault-tolerance,
convoy-avoidance and more, there is an increased demand in lock-free
methods in parallel applications, hence also in multiprocessor/multicore
system services. This is why a lock-free
multithreading library is important. To the best of our knowledge
LFthreads is the first thread library that provides a lock-free
implementation of blocking synchronization primitives for application threads.
Lock-free implementation of objects with blocking semantics may sound like
a contradicting goal. However, such objects have benefits:
e.g. library operations that block and unblock threads on the same
synchronization object can make progress in parallel while maintaining
the desired thread-level semantics
and without having to wait for any "slow" operations among them.
Besides, as no spin-locks or similar synchronization mechanisms are employed,
processors are always able to do useful work. As a consequence,
applications, too, can enjoy enhanced parallelism and fault-tolerance.
The synchronization in LFthreads is achieved by a new method, which
we call responsibility hand-off (RHO), that does not need any
special kernel support.
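The responsibility hand-off is hard to condense faithfully, but its control flow can be caricatured. In the Python model below the atomic cell is simulated with a small lock, so this is emphatically not lock-free code, it glosses over races that the real RHO method handles, and all names are invented:

# Conceptual model of a hand-off between release() and a late acquirer.
import threading
from collections import deque

class Cell:
    # simulated atomic cell: the guard lock only models the atomicity of a
    # hardware compare-and-swap instruction, which the real library uses
    def __init__(self, value):
        self._value, self._guard = value, threading.Lock()

    def cas(self, expected, new):
        with self._guard:
            if self._value != expected:
                return False
            self._value = new
            return True

class TinyMutex:
    def __init__(self):
        self.state = Cell("FREE")          # FREE | LOCKED
        self.waiters = deque()             # stand-in for a lock-free queue

    def _grant(self):
        try:
            self.waiters.popleft().set()   # wake a waiter; it now owns us
            return True
        except IndexError:
            return False

    def acquire(self):
        if self.state.cas("FREE", "LOCKED"):
            return                         # fast path
        gate = threading.Event()
        self.waiters.append(gate)          # announce ourselves, then re-check
        if self.state.cas("FREE", "LOCKED"):
            try:
                self.waiters.remove(gate)  # withdraw the announcement
            except ValueError:
                self.release()             # a release also granted to us:
            return                         # give the surplus ownership back
        gate.wait()                        # ownership will be handed to us

    def release(self):
        if self._grant():
            return
        self.state.cas("LOCKED", "FREE")
        # responsibility hand-off: a waiter may have announced itself just
        # before we freed the mutex; whoever wins this CAS takes it over
        if self.waiters and self.state.cas("FREE", "LOCKED"):
            if not self._grant():
                self.state.cas("LOCKED", "FREE")

The point of the pattern shows in release(): instead of waiting for a blocked peer, the releasing thread either grants ownership directly or frees the mutex and, on detecting a late announcement, takes responsibility for the hand-off itself.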
Export
BibTeX
@techreport{GidenstamPapatriantafilou2007,
TITLE = {{LFthreads}: a lock-free thread library},
AUTHOR = {Gidenstam, Anders and Papatriantafilou, Marina},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-1-003},
NUMBER = {MPI-I-2007-1-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2007},
DATE = {2007},
ABSTRACT = {This paper presents the synchronization in LFthreads, a thread library entirely based on lock-free methods, i.e. no spin-locks or similar synchronization mechanisms are employed in the implementation of the multithreading. Since lock-freedom is highly desirable in multiprocessors/multicores due to its advantages in parallelism, fault-tolerance, convoy-avoidance and more, there is an increased demand in lock-free methods in parallel applications, hence also in multiprocessor/multicore system services. This is why a lock-free multithreading library is important. To the best of our knowledge LFthreads is the first thread library that provides a lock-free implementation of blocking synchronization primitives for application threads. Lock-free implementation of objects with blocking semantics may sound like a contradicting goal. However, such objects have benefits: e.g. library operations that block and unblock threads on the same synchronization object can make progress in parallel while maintaining the desired thread-level semantics and without having to wait for any ``slow'' operations among them. Besides, as no spin-locks or similar synchronization mechanisms are employed, processors are always able to do useful work. As a consequence, applications, too, can enjoy enhanced parallelism and fault-tolerance. The synchronization in LFthreads is achieved by a new method, which we call responsibility hand-off (RHO), that does not need any special kernel support.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Gidenstam, Anders
%A Papatriantafilou, Marina
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T LFthreads: a lock-free thread library :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-66F8-7
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-1-003
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2007
%P 36 p.
%X This paper presents the synchronization in LFthreads, a thread library
entirely based on lock-free methods, i.e. no
spin-locks or similar synchronization mechanisms are employed in the
implementation of the multithreading.
Since lock-freedom is highly desirable in multiprocessors/multicores
due to its advantages in parallelism, fault-tolerance,
convoy-avoidance and more, there is an increased demand in lock-free
methods in parallel applications, hence also in multiprocessor/multicore
system services. This is why a lock-free
multithreading library is important. To the best of our knowledge
LFthreads is the first thread library that provides a lock-free
implementation of blocking synchronization primitives for application threads.
Lock-free implementation of objects with blocking semantics may sound like
a contradicting goal. However, such objects have benefits:
e.g. library operations that block and unblock threads on the same
synchronization object can make progress in parallel while maintaining
the desired thread-level semantics
and without having to wait for any "slow" operations among them.
Besides, as no spin-locks or similar synchronization mechanisms are employed,
processors are always able to do useful work. As a consequence,
applications, too, can enjoy enhanced parallelism and fault-tolerance.
The synchronization in LFthreads is achieved by a new method, which
we call responsibility hand-off (RHO), that does not need any
special kernel support.
%B Research Report / Max-Planck-Institut für Informatik
Global Illumination using Photon Ray Splatting
R. Herzog, V. Havran, S. Kinuwaki, K. Myszkowski and H.-P. Seidel
Technical Report, 2007
R. Herzog, V. Havran, S. Kinuwaki, K. Myszkowski and H.-P. Seidel
Technical Report, 2007
Abstract
We present a novel framework for efficiently computing the indirect
illumination in diffuse and moderately glossy scenes using density estimation
techniques.
A vast majority of existing global illumination approaches either quickly
computes an approximate solution, which may not be adequate for previews, or
performs a much more time-consuming computation to obtain high-quality results
for the indirect illumination. Our method improves photon density estimation,
which is an approximate solution, and leads to significantly better visual
quality in particular for complex geometry, while only slightly increasing the
computation time. We perform direct splatting of photon rays, which allows us
to use simpler search data structures. Our novel lighting computation is
derived from basic radiometric theory and requires only small changes to
existing photon splatting approaches.
Since our density estimation is carried out in ray space rather than on
surfaces, as in the commonly used photon mapping algorithm, the results are
more robust against geometrically incurred sources of bias. This holds also in
combination with final gathering where photon mapping often overestimates the
illumination near concave geometric features. In addition, we show that our
splatting technique can be extended to handle moderately glossy surfaces and
can be combined with traditional irradiance caching for sparse sampling and
filtering in image space.
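Density estimation by splatting is compact to write down. The sketch below (plain 2D photon splatting with an Epanechnikov kernel, not the paper's ray-space estimator) recovers an approximately unit irradiance from uniformly scattered photons of unit total power:

# Generic photon splatting via kernel density estimation (toy setup).
import numpy as np

def splat_irradiance(shading_pts, photon_pts, photon_power, radius):
    # each photon distributes its power over nearby shading points
    # through a 2D Epanechnikov kernel (integrates to 1 over its disk)
    estimate = np.zeros(len(shading_pts))
    for p, power in zip(photon_pts, photon_power):
        d2 = np.sum((shading_pts - p) ** 2, axis=1)
        kernel = np.maximum(0.0, 1.0 - d2 / radius ** 2)
        estimate += power * kernel * 2.0 / (np.pi * radius ** 2)
    return estimate

rng = np.random.default_rng(0)
photons = rng.uniform(0.0, 1.0, size=(5000, 2))   # hit points on a unit square
power = np.full(5000, 1.0 / 5000)                 # unit total flux
grid = np.stack(np.meshgrid(*2 * [np.linspace(0.1, 0.9, 8)]), -1).reshape(-1, 2)
print(splat_irradiance(grid, photons, power, radius=0.1))  # each value ~ 1.0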
Export
BibTeX
@techreport{HerzogReport2007,
TITLE = {Global Illumination using Photon Ray Splatting},
AUTHOR = {Herzog, Robert and Havran, Vlastimil and Kinuwaki, Shinichi and Myszkowski, Karol and Seidel, Hans-Peter},
LANGUAGE = {eng},
NUMBER = {MPI-I-2007-4-007},
LOCALID = {Local-ID: C12573CC004A8E26-88919E23BF524D6AC12573C4005B8D41-HerzogReport2007},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken, Germany},
YEAR = {2007},
DATE = {2007},
ABSTRACT = {We present a novel framework for efficiently computing the indirect illumination in diffuse and moderately glossy scenes using density estimation techniques. A vast majority of existing global illumination approaches either quickly computes an approximate solution, which may not be adequate for previews, or performs a much more time-consuming computation to obtain high-quality results for the indirect illumination. Our method improves photon density estimation, which is an approximate solution, and leads to significantly better visual quality in particular for complex geometry, while only slightly increasing the computation time. We perform direct splatting of photon rays, which allows us to use simpler search data structures. Our novel lighting computation is derived from basic radiometric theory and requires only small changes to existing photon splatting approaches. Since our density estimation is carried out in ray space rather than on surfaces, as in the commonly used photon mapping algorithm, the results are more robust against geometrically incurred sources of bias. This holds also in combination with final gathering where photon mapping often overestimates the illumination near concave geometric features. In addition, we show that our splatting technique can be extended to handle moderately glossy surfaces and can be combined with traditional irradiance caching for sparse sampling and filtering in image space.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Herzog, Robert
%A Havran, Vlastimil
%A Kinuwaki, Shinichi
%A Myszkowski, Karol
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
International Max Planck Research School, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Global Illumination using Photon Ray Splatting :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1F57-6
%F EDOC: 356502
%F OTHER: Local-ID: C12573CC004A8E26-88919E23BF524D6AC12573C4005B8D41-HerzogReport2007
%Y Max-Planck-Institut für Informatik
%C Saarbrücken, Germany
%D 2007
%P 66 p.
%X We present a novel framework for efficiently computing the indirect
illumination in diffuse and moderately glossy scenes using density estimation
techniques.
A vast majority of existing global illumination approaches either quickly
computes an approximate solution, which may not be adequate for previews, or
performs a much more time-consuming computation to obtain high-quality results
for the indirect illumination. Our method improves photon density estimation,
which is an approximate solution, and leads to significantly better visual
quality in particular for complex geometry, while only slightly increasing the
computation time. We perform direct splatting of photon rays, which allows us
to use simpler search data structures. Our novel lighting computation is
derived from basic radiometric theory and requires only small changes to
existing photon splatting approaches.
Since our density estimation is carried out in ray space rather than on
surfaces, as in the commonly used photon mapping algorithm, the results are
more robust against geometrically incurred sources of bias. This holds also in
combination with final gathering where photon mapping often overestimates the
illumination near concave geometric features. In addition, we show that our
splatting technique can be extended to handle moderately glossy surfaces and
can be combined with traditional irradiance caching for sparse sampling and
filtering in image space.
%B Research Report
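The core ray-space density estimation can be sketched in a few lines of Python (a minimal sketch of our own under simplified assumptions: an Epanechnikov kernel, brute-force ray search, and hypothetical helper names; the report's actual splatting and search structures are more elaborate):

import numpy as np

def point_to_ray_distance(p, origin, direction):
    """Perpendicular distance from surface point p to a photon ray."""
    t = max(np.dot(p - origin, direction), 0.0)  # closest parameter along the ray
    return np.linalg.norm(p - (origin + t * direction))

def splat_irradiance(p, photon_rays, radius=0.1):
    """Kernel density estimate over photon *rays* near p, not surface hit points."""
    total = 0.0
    for origin, direction, power in photon_rays:
        d = point_to_ray_distance(p, origin, direction)
        if d < radius:
            total += (1.0 - (d / radius) ** 2) * power  # Epanechnikov weight
    return total / (0.5 * np.pi * radius ** 2)  # normalize by kernel disc area

rays = [(np.zeros(3), np.array([0.0, 0.0, 1.0]), 0.5)]
print(splat_irradiance(np.array([0.02, 0.0, 0.4]), rays))

Because the estimate never anchors photons to surfaces, geometric features such as concave corners do not truncate the density support, which is where surface-based photon mapping picks up bias.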
Superposition for Finite Domains
T. Hillenbrand and C. Weidenbach
Technical Report, 2007
T. Hillenbrand and C. Weidenbach
Technical Report, 2007
Export
BibTeX
@techreport{HillenbrandWeidenbach2007,
TITLE = {Superposition for Finite Domains},
AUTHOR = {Hillenbrand, Thomas and Weidenbach, Christoph},
LANGUAGE = {eng},
NUMBER = {MPI-I-2007-RG1-002},
LOCALID = {Local-ID: C12573CC004A8E26-1CF84BA6556F8748C12572C1002F229B-HillenbrandWeidenbach2007Report},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken, Germany},
YEAR = {2007},
DATE = {2007},
TYPE = {Max-Planck-Institut für Informatik / Research Report},
}
Endnote
%0 Report
%A Hillenbrand, Thomas
%A Weidenbach, Christoph
%+ Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
%T Superposition for Finite Domains :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-20DA-8
%F EDOC: 356455
%F OTHER: Local-ID: C12573CC004A8E26-1CF84BA6556F8748C12572C1002F229B-HillenbrandWeidenbach2007Report
%Y Max-Planck-Institut für Informatik
%C Saarbrücken, Germany
%D 2007
%P 25 p.
%B Max-Planck-Institut für Informatik / Research Report
Efficient Surface Reconstruction for Piecewise Smooth Objects
P. Jenke, M. Wand and W. Strasser
Technical Report, 2007
P. Jenke, M. Wand and W. Strasser
Technical Report, 2007
Export
BibTeX
@techreport{Jenke2007,
TITLE = {Efficient Surface Reconstruction for Piecewise Smooth Objects},
AUTHOR = {Jenke, Philipp and Wand, Michael and Strasser, Wolfgang},
LANGUAGE = {eng},
ISSN = {0946-3852},
URL = {urn:nbn:de:bsz:21-opus-32001},
NUMBER = {WSI-2007-05},
INSTITUTION = {Wilhelm-Schickard-Institut / Universit{\"a}t T{\"u}bingen},
ADDRESS = {T{\"u}bingen},
YEAR = {2007},
DATE = {2007},
TYPE = {WSI},
VOLUME = {2007-05},
}
Endnote
%0 Report
%A Jenke, Philipp
%A Wand, Michael
%A Strasser, Wolfgang
%+ External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
%T Efficient Surface Reconstruction for Piecewise Smooth Objects :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0023-D3F7-A
%U urn:nbn:de:bsz:21-opus-32001
%Y Wilhelm-Schickard-Institut / Universität Tübingen
%C Tübingen
%D 2007
%P 17 p.
%B WSI
%N 2007-05
%@ false
%U http://nbn-resolving.de/urn:nbn:de:bsz:21-opus-32001
NAGA: Searching and Ranking Knowledge
G. Kasneci, F. M. Suchanek, G. Ifrim, M. Ramanath and G. Weikum
Technical Report, 2007
G. Kasneci, F. M. Suchanek, G. Ifrim, M. Ramanath and G. Weikum
Technical Report, 2007
Abstract
The Web has the potential to become the world's largest knowledge base.
In order to unleash this potential, the wealth of information available on the
web needs to be extracted and organized. There is a need for new querying
techniques that are simple yet more expressive than those provided by standard
keyword-based search engines. Search for knowledge rather than Web pages needs
to consider inherent semantic structures like entities (person, organization,
etc.) and relationships (isA, locatedIn, etc.).
In this paper, we propose NAGA, a new semantic search engine. NAGA's
knowledge base, which is organized as a graph with typed edges, consists of
millions of entities and relationships automatically extracted from Web-based
corpora. A query language capable of expressing keyword search for the casual
user as well as graph queries with regular expressions for the expert enables
the formulation of queries with additional semantic information. We introduce a
novel scoring model, based on the principles of generative language models,
which formalizes several notions like confidence, informativeness and
compactness and uses them to rank query results. We demonstrate NAGA's
superior result quality over current search engines by conducting a
comprehensive evaluation, including user assessments, for advanced queries.
Export
BibTeX
@techreport{TechReportKSIRW-2007,
TITLE = {{NAGA}: Searching and Ranking Knowledge},
AUTHOR = {Kasneci, Gjergji and Suchanek, Fabian M. and Ifrim, Georgiana and Ramanath, Maya and Weikum, Gerhard},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2007-5-001},
LOCALID = {Local-ID: C12573CC004A8E26-0C33A6E805909705C12572AE003DA15B-TechReportKSIRW-2007},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken, Germany},
YEAR = {2007},
DATE = {2007},
ABSTRACT = {The Web has the potential to become the world's largest knowledge base. In order to unleash this potential, the wealth of information available on the web needs to be extracted and organized. There is a need for new querying techniques that are simple yet more expressive than those provided by standard keyword-based search engines. Search for knowledge rather than Web pages needs to consider inherent semantic structures like entities (person, organization, etc.) and relationships (isA, locatedIn, etc.). In this paper, we propose {NAGA}, a new semantic search engine. {NAGA}'s knowledge base, which is organized as a graph with typed edges, consists of millions of entities and relationships automatically extracted from Web-based corpora. A query language capable of expressing keyword search for the casual user as well as graph queries with regular expressions for the expert enables the formulation of queries with additional semantic information. We introduce a novel scoring model, based on the principles of generative language models, which formalizes several notions like confidence, informativeness and compactness and uses them to rank query results. We demonstrate {NAGA}'s superior result quality over current search engines by conducting a comprehensive evaluation, including user assessments, for advanced queries.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Kasneci, Gjergji
%A Suchanek, Fabian M.
%A Ifrim, Georgiana
%A Ramanath, Maya
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T NAGA: Searching and Ranking Knowledge :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1FFC-1
%F EDOC: 356470
%@ 0946-011X
%F OTHER: Local-ID: C12573CC004A8E26-0C33A6E805909705C12572AE003DA15B-TechReportKSIRW-2007
%Y Max-Planck-Institut für Informatik
%C Saarbrücken, Germany
%D 2007
%P 42 p.
%X The Web has the potential to become the world's largest knowledge base.
In order to unleash this potential, the wealth of information available on the
web needs to be extracted and organized. There is a need for new querying
techniques that are simple yet more expressive than those provided by standard
keyword-based search engines. Search for knowledge rather than Web pages needs
to consider inherent semantic structures like entities (person, organization,
etc.) and relationships (isA, locatedIn, etc.).
In this paper, we propose NAGA, a new semantic search engine. NAGA's
knowledge base, which is organized as a graph with typed edges, consists of
millions of entities and relationships automatically extracted from Web-based
corpora. A query language capable of expressing keyword search for the casual
user as well as graph queries with regular expressions for the expert enables
the formulation of queries with additional semantic information. We introduce a
novel scoring model, based on the principles of generative language models,
which formalizes several notions like confidence, informativeness and
compactness and uses them to rank query results. We demonstrate NAGA's
superior result quality over current search engines by conducting a
comprehensive evaluation, including user assessments, for advanced queries.
%B Research Report
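The flavour of such a scoring model can be made concrete with a toy example (entirely illustrative Python; the weights, names, and the IDF-style informativeness below are our inventions, not NAGA's actual formulas):

import math

def edge_informativeness(pred, obj, fact_counts):
    """Rarer (predicate, object) pairs carry more information, akin to IDF."""
    total = sum(fact_counts.values())
    return math.log(total / fact_counts.get((pred, obj), 1))

def score_answer(edges, fact_counts):
    """edges: list of (subj, pred, obj, confidence) forming one answer graph."""
    s = 0.0
    for subj, pred, obj, conf in edges:
        s += math.log(conf) + edge_informativeness(pred, obj, fact_counts)
    return s - 0.1 * len(edges)  # compactness: mild penalty per extra edge

fact_counts = {("bornIn", "Ulm"): 3, ("isA", "physicist"): 500}
answer = [("Einstein", "bornIn", "Ulm", 0.95),
          ("Einstein", "isA", "physicist", 0.99)]
print(score_answer(answer, fact_counts))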
Construction of smooth maps with mean value coordinates
T. Langer and H.-P. Seidel
Technical Report, 2007
T. Langer and H.-P. Seidel
Technical Report, 2007
Abstract
Bernstein polynomials are a classical tool in Computer Aided Design to
create smooth maps
with a high degree of local control.
They are used for the construction of Bézier surfaces, free-form
deformations, and many other applications.
However, classical Bernstein polynomials are only defined for simplices
and parallelepipeds.
These can in general not directly capture the shape of arbitrary
objects. Instead,
a tessellation of the desired domain has to be done first.
We construct smooth maps on arbitrary sets of polytopes
such that the restriction to each of the polytopes is a Bernstein
polynomial in mean value coordinates
(or any other generalized barycentric coordinates).
In particular, we show how smooth transitions between different
domain polytopes can be ensured.
Export
BibTeX
@techreport{LangerSeidel2007,
TITLE = {Construction of smooth maps with mean value coordinates},
AUTHOR = {Langer, Torsten and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-002},
NUMBER = {MPI-I-2007-4-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2007},
DATE = {2007},
ABSTRACT = {Bernstein polynomials are a classical tool in Computer Aided Design to create smooth maps with a high degree of local control. They are used for the construction of B\'ezier surfaces, free-form deformations, and many other applications. However, classical Bernstein polynomials are only defined for simplices and parallelepipeds. These can in general not directly capture the shape of arbitrary objects. Instead, a tessellation of the desired domain has to be done first. We construct smooth maps on arbitrary sets of polytopes such that the restriction to each of the polytopes is a Bernstein polynomial in mean value coordinates (or any other generalized barycentric coordinates). In particular, we show how smooth transitions between different domain polytopes can be ensured.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Langer, Torsten
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Construction of smooth maps with mean value coordinates :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-66DF-1
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-002
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2007
%P 22 p.
%X Bernstein polynomials are a classical tool in Computer Aided Design to
create smooth maps
with a high degree of local control.
They are used for the construction of Bézier surfaces, free-form
deformations, and many other applications.
However, classical Bernstein polynomials are only defined for simplices
and parallelepipeds.
These can in general not directly capture the shape of arbitrary
objects. Instead,
a tessellation of the desired domain has to be done first.
We construct smooth maps on arbitrary sets of polytopes
such that the restriction to each of the polytopes is a Bernstein
polynomial in mean value coordinates
(or any other generalized barycentric coordinates).
In particular, we show how smooth transitions between different
domain polytopes can be ensured.
%B Research Report / Max-Planck-Institut für Informatik
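For readers unfamiliar with the underlying coordinates: planar mean value coordinates themselves take only a few lines to compute (an illustrative Python sketch with our own helper name; the report builds Bernstein polynomials on top of such coordinates):

import numpy as np

def mean_value_coordinates(p, verts):
    """Floater's mean value coordinates of p w.r.t. a planar polygon."""
    n = len(verts)
    s = [v - p for v in verts]
    r = [np.linalg.norm(si) for si in s]
    ang = []
    for i in range(n):  # angle at p spanned by consecutive vertices
        j = (i + 1) % n
        cosa = np.dot(s[i], s[j]) / (r[i] * r[j])
        ang.append(np.arccos(np.clip(cosa, -1.0, 1.0)))
    w = np.array([(np.tan(ang[i - 1] / 2) + np.tan(ang[i] / 2)) / r[i]
                  for i in range(n)])
    return w / w.sum()  # normalize to a partition of unity

square = [np.array(v, float) for v in [(0, 0), (1, 0), (1, 1), (0, 1)]]
print(mean_value_coordinates(np.array([0.5, 0.5]), square))  # four times 0.25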
A volumetric approach to interactive shape editing
C. Stoll, E. de Aguiar, C. Theobalt and H.-P. Seidel
Technical Report, 2007
C. Stoll, E. de Aguiar, C. Theobalt and H.-P. Seidel
Technical Report, 2007
Abstract
We present a novel approach to real-time shape editing that produces
physically plausible deformations using an efficient and
easy-to-implement volumetric approach. Our algorithm alternates between
a linear tetrahedral Laplacian deformation step and a differential
update in which rotational transformations are approximated. By means of
this iterative process we can achieve non-linear deformation results
while having to solve only linear equation systems. The differential
update step relies on estimating the rotational component of the
deformation relative to the rest pose. This makes the method very stable
as the shape can be reverted to its rest pose even after extreme
deformations. Only a few point handles or area handles imposing
an orientation are needed to achieve high quality deformations, which
makes the approach intuitive to use. We show that our technique is well
suited for interactive shape manipulation and also provides an elegant
way to animate models with captured motion data.
Export
BibTeX
@techreport{Stoll2007,
TITLE = {A volumetric approach to interactive shape editing},
AUTHOR = {Stoll, Carsten and de Aguiar, Edilson and Theobalt, Christian and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-004},
NUMBER = {MPI-I-2007-4-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2007},
DATE = {2007},
ABSTRACT = {We present a novel approach to real-time shape editing that produces physically plausible deformations using an efficient and easy-to-implement volumetric approach. Our algorithm alternates between a linear tetrahedral Laplacian deformation step and a differential update in which rotational transformations are approximated. By means of this iterative process we can achieve non-linear deformation results while having to solve only linear equation systems. The differential update step relies on estimating the rotational component of the deformation relative to the rest pose. This makes the method very stable as the shape can be reverted to its rest pose even after extreme deformations. Only a few point handles or area handles imposing an orientation are needed to achieve high quality deformations, which makes the approach intuitive to use. We show that our technique is well suited for interactive shape manipulation and also provides an elegant way to animate models with captured motion data.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Stoll, Carsten
%A de Aguiar, Edilson
%A Theobalt, Christian
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T A volumetric approach to interactive shape editing :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-66D6-4
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-004
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2007
%P 28 p.
%X We present a novel approach to real-time shape editing that produces
physically plausible deformations using an efficient and
easy-to-implement volumetric approach. Our algorithm alternates between
a linear tetrahedral Laplacian deformation step and a differential
update in which rotational transformations are approximated. By means of
this iterative process we can achieve non-linear deformation results
while having to solve only linear equation systems. The differential
update step relies on estimating the rotational component of the
deformation relative to the rest pose. This makes the method very stable
as the shape can be reverted to its rest pose even after extreme
deformations. Only a few point handles or area handles imposing
an orientation are needed to achieve high quality deformations, which
makes the approach intuitive to use. We show that our technique is well
suited for interactive shape manipulation and also provides an elegant
way to animate models with captured motion data.
%B Research Report / Max-Planck-Institut für Informatik
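The alternation described above can be sketched on a toy example (our own simplified 2D polyline version with uniform Laplacian weights; the report operates on tetrahedral volumes): solve a linear Laplacian system, re-estimate per-vertex rotations against the rest pose, repeat.

import numpy as np

def uniform_laplacian(n):
    L = np.zeros((n, n))
    for i in range(n):
        nb = [j for j in (i - 1, i + 1) if 0 <= j < n]
        L[i, i] = len(nb)
        for j in nb:
            L[i, j] = -1.0
    return L

def estimate_rotation(rest_edges, cur_edges):
    """Best-fit 2D rotation between edge sets (Procrustes via SVD)."""
    H = rest_edges.T @ cur_edges
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R

# rest pose: a straight polyline; handles: both endpoints, one displaced
rest = np.stack([np.linspace(0, 1, 6), np.zeros(6)], axis=1)
handles = {0: rest[0], 5: np.array([0.8, 0.5])}
L = uniform_laplacian(6)
delta = L @ rest  # differential (Laplacian) coordinates of the rest pose
x = rest.copy()
for _ in range(10):  # alternate: rotation estimation <-> linear solve
    rhs = np.zeros_like(delta)
    for i in range(6):
        nb = [j for j in (i - 1, i + 1) if 0 <= j < 6]
        Ri = estimate_rotation(rest[nb] - rest[i], x[nb] - x[i])
        rhs[i] = delta[i] @ Ri.T  # rotate the differential coordinate
    A, b = L.copy(), rhs.copy()
    for i, pos in handles.items():  # impose handle positions as hard rows
        A[i] = 0.0; A[i, i] = 1.0; b[i] = pos
    x = np.linalg.solve(A, b)
print(x.round(3))

Because rotations are always estimated relative to the rest pose, the iteration can recover the rest shape even after extreme edits, which is the stability property the abstract emphasizes.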
Yago: a large ontology from Wikipedia and WordNet
F. Suchanek, G. Kasneci and G. Weikum
Technical Report, 2007
F. Suchanek, G. Kasneci and G. Weikum
Technical Report, 2007
Abstract
This article presents YAGO, a large ontology with high coverage and precision.
YAGO has been automatically derived from Wikipedia and WordNet. It
comprises entities and relations, and currently contains more than 1.7
million
entities and 15 million facts. These include the taxonomic Is-A
hierarchy as well as semantic relations between entities. The facts
for YAGO have been extracted from the category system and the
infoboxes of Wikipedia and have been combined with taxonomic relations
from WordNet. Type checking techniques help us keep YAGO's precision
at 95% -- as proven by an extensive evaluation study. YAGO is based on
a clean logical model with a decidable consistency.
Furthermore, it allows representing n-ary relations in a natural way
while maintaining compatibility with RDFS. A powerful query model
facilitates access to YAGO's data.
Export
BibTeX
@techreport{SuchanekKasneciWeikum2007,
TITLE = {Yago: a large ontology from Wikipedia and {WordNet}},
AUTHOR = {Suchanek, Fabian and Kasneci, Gjergji and Weikum, Gerhard},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-5-003},
NUMBER = {MPI-I-2007-5-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2007},
DATE = {2007},
ABSTRACT = {This article presents YAGO, a large ontology with high coverage and precision. YAGO has been automatically derived from Wikipedia and WordNet. It comprises entities and relations, and currently contains more than 1.7 million entities and 15 million facts. These include the taxonomic Is-A hierarchy as well as semantic relations between entities. The facts for YAGO have been extracted from the category system and the infoboxes of Wikipedia and have been combined with taxonomic relations from WordNet. Type checking techniques help us keep YAGO's precision at 95% -- as proven by an extensive evaluation study. YAGO is based on a clean logical model with a decidable consistency. Furthermore, it allows representing n-ary relations in a natural way while maintaining compatibility with RDFS. A powerful query model facilitates access to YAGO's data.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Suchanek, Fabian
%A Kasneci, Gjergji
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Yago: a large ontology from Wikipedia and WordNet :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-66CA-F
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-5-003
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2007
%P 67 p.
%X This article presents YAGO, a large ontology with high coverage and precision.
YAGO has been automatically derived from Wikipedia and WordNet. It
comprises entities and relations, and currently contains more than 1.7
million
entities and 15 million facts. These include the taxonomic Is-A
hierarchy as well as semantic relations between entities. The facts
for YAGO have been extracted from the category system and the
infoboxes of Wikipedia and have been combined with taxonomic relations
from WordNet. Type checking techniques help us keep YAGO's precision
at 95% -- as proven by an extensive evaluation study. YAGO is based on
a clean logical model with a decidable consistency.
Furthermore, it allows representing n-ary relations in a natural way
while maintaining compatibility with RDFS. A powerful query model
facilitates access to YAGO's data.
%B Research Report / Max-Planck-Institut für Informatik
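The representational trick for n-ary relations can be illustrated with a toy fact store (hypothetical Python; class and method names are ours): every fact receives an identifier that may itself appear as the subject of further facts, so plain binary triples suffice while remaining RDFS-compatible.

class FactStore:
    def __init__(self):
        self.facts = {}  # fact id -> (subject, predicate, object)
        self._next = 0

    def add(self, subj, pred, obj):
        fid = "f%d" % self._next
        self._next += 1
        self.facts[fid] = (subj, pred, obj)
        return fid  # the id can serve as the subject of further facts

    def query(self, subj=None, pred=None, obj=None):
        pattern = (subj, pred, obj)
        return [(fid, f) for fid, f in self.facts.items()
                if all(q is None or q == v for q, v in zip(pattern, f))]

kb = FactStore()
f1 = kb.add("AlbertEinstein", "hasWonPrize", "NobelPrize")
kb.add(f1, "inYear", "1921")  # n-ary: annotate the fact itself
print(kb.query(pred="inYear"))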
2006
Gesture modeling and animation by imitation
I. Albrecht, M. Kipp, M. P. Neff and H.-P. Seidel
Technical Report, 2006
I. Albrecht, M. Kipp, M. P. Neff and H.-P. Seidel
Technical Report, 2006
Abstract
Animated characters that move and gesticulate appropriately with spoken
text are useful in a wide range of applications. Unfortunately, they are
very difficult to generate, even more so when a unique, individual
movement style is required. We present a system that is capable of
producing full-body gesture animation for given input text in the style of
a particular performer. Our process starts with video of a performer whose
gesturing style we wish to animate. A tool-assisted annotation process is
first performed on the video, from which a statistical model of the
person's particular gesturing style is built. Using this model and tagged
input text, our generation algorithm creates a gesture script appropriate
for the given text. As opposed to isolated singleton gestures, our gesture
script specifies a stream of continuous gestures coordinated with speech.
This script is passed to an animation system, which enhances the gesture
description with more detail and prepares a refined description of the
motion. An animation subengine can then generate either kinematic or
physically simulated motion based on this description. The system is
capable of creating animation that replicates a particular performance in
the video corpus, generating new animation for the spoken text that is
consistent with the given performer's style and creating performances of a
given text sample in the style of different performers.
Export
BibTeX
@techreport{AlbrechtKippNeffSeidel2006,
TITLE = {Gesture modeling and animation by imitation},
AUTHOR = {Albrecht, Irene and Kipp, Michael and Neff, Michael Paul and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-008},
NUMBER = {MPI-I-2006-4-008},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {Animated characters that move and gesticulate appropriately with spoken text are useful in a wide range of applications. Unfortunately, they are very difficult to generate, even more so when a unique, individual movement style is required. We present a system that is capable of producing full-body gesture animation for given input text in the style of a particular performer. Our process starts with video of a performer whose gesturing style we wish to animate. A tool-assisted annotation process is first performed on the video, from which a statistical model of the person's particular gesturing style is built. Using this model and tagged input text, our generation algorithm creates a gesture script appropriate for the given text. As opposed to isolated singleton gestures, our gesture script specifies a stream of continuous gestures coordinated with speech. This script is passed to an animation system, which enhances the gesture description with more detail and prepares a refined description of the motion. An animation subengine can then generate either kinematic or physically simulated motion based on this description. The system is capable of creating animation that replicates a particular performance in the video corpus, generating new animation for the spoken text that is consistent with the given performer's style and creating performances of a given text sample in the style of different performers.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Albrecht, Irene
%A Kipp, Michael
%A Neff, Michael Paul
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Multimodal Computing and Interaction
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Gesture modeling and animation by imitation :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6979-2
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-008
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 62 p.
%X Animated characters that move and gesticulate appropriately with spoken
text are useful in a wide range of applications. Unfortunately, they are
very difficult to generate, even more so when a unique, individual
movement style is required. We present a system that is capable of
producing full-body gesture animation for given input text in the style of
a particular performer. Our process starts with video of a performer whose
gesturing style we wish to animate. A tool-assisted annotation process is
first performed on the video, from which a statistical model of the
person's particular gesturing style is built. Using this model and tagged
input text, our generation algorithm creates a gesture script appropriate
for the given text. As opposed to isolated singleton gestures, our gesture
script specifies a stream of continuous gestures coordinated with speech.
This script is passed to an animation system, which enhances the gesture
description with more detail and prepares a refined description of the
motion. An animation subengine can then generate either kinematic or
physically simulated motion based on this description. The system is
capable of creating animation that replicates a particular performance in
the video corpus, generating new animation for the spoken text that is
consistent with the given performer's style and creating performances of a
given text sample in the style of different performers.
%B Research Report / Max-Planck-Institut für Informatik
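As a toy stand-in for the statistical style model, one can imagine a first-order Markov model over gesture types conditioned on word tags (illustrative Python only; all probabilities, tags, and gesture names below are invented):

import random

# P(gesture | previous gesture, tag of the current word) -- made-up numbers
model = {
    ("rest",  "EMPHASIS"): [("beat", 0.7), ("rest", 0.3)],
    ("beat",  "EMPHASIS"): [("beat", 0.5), ("rest", 0.5)],
    ("point", "EMPHASIS"): [("beat", 0.6), ("rest", 0.4)],
    ("rest",  "DEIXIS"):   [("point", 0.8), ("rest", 0.2)],
    ("beat",  "DEIXIS"):   [("point", 0.6), ("rest", 0.4)],
    ("point", "DEIXIS"):   [("point", 0.5), ("rest", 0.5)],
}

def generate_script(tagged_words, seed=0):
    """Sample a continuous gesture stream aligned with the tagged words."""
    rng, prev, script = random.Random(seed), "rest", []
    for word, tag in tagged_words:
        choices = model.get((prev, tag), [("rest", 1.0)])
        gestures, probs = zip(*choices)
        prev = rng.choices(gestures, probs)[0]
        script.append((word, prev))
    return script

print(generate_script([("look", "DEIXIS"), ("there", "DEIXIS"), ("now", "EMPHASIS")]))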
A neighborhood-based approach for clustering of linked document collections
R. Angelova and S. Siersdorfer
Technical Report, 2006
R. Angelova and S. Siersdorfer
Technical Report, 2006
Abstract
This technical report addresses the problem of automatically structuring
linked document collections by using clustering. In contrast to
traditional clustering, we study the clustering problem in the light of
available link structure information for the data set
(e.g., hyperlinks among web documents or co-authorship among
bibliographic data entries).
Our approach is based on iterative relaxation of cluster assignments,
and can be built on top of any clustering algorithm (e.g., k-means or
DBSCAN). These techniques result in higher cluster purity, better
overall accuracy, and make self-organization more robust. Our
comprehensive experiments on three different real-world corpora
demonstrate the benefits of our approach.
Export
BibTeX
@techreport{AngelovaSiersdorfer2006,
TITLE = {A neighborhood-based approach for clustering of linked document collections},
AUTHOR = {Angelova, Ralitsa and Siersdorfer, Stefan},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-5-005},
NUMBER = {MPI-I-2006-5-005},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {This technical report addresses the problem of automatically structuring linked document collections by using clustering. In contrast to traditional clustering, we study the clustering problem in the light of available link structure information for the data set (e.g., hyperlinks among web documents or co-authorship among bibliographic data entries). Our approach is based on iterative relaxation of cluster assignments, and can be built on top of any clustering algorithm (e.g., k-means or DBSCAN). These techniques result in higher cluster purity, better overall accuracy, and make self-organization more robust. Our comprehensive experiments on three different real-world corpora demonstrate the benefits of our approach.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Angelova, Ralitsa
%A Siersdorfer, Stefan
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T A neighborhood-based approach for clustering of linked document collections :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-670D-4
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-5-005
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 32 p.
%X This technical report addresses the problem of automatically structuring
linked document collections by using clustering. In contrast to
traditional clustering, we study the clustering problem in the light of
available link structure information for the data set
(e.g., hyperlinks among web documents or co-authorship among
bibliographic data entries).
Our approach is based on iterative relaxation of cluster assignments,
and can be built on top of any clustering algorithm (e.g., k-means or
DBSCAN). These techniques result in higher cluster purity, better
overall accuracy, and make self-organization more robust. Our
comprehensive experiments on three different real-world corpora
demonstrate the benefits of our approach.
%B Research Report / Max-Planck-Institut für Informatik
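A minimal sketch of the iterative relaxation idea (our own simplified Python, not the report's exact update rule): each document's cluster label is repeatedly re-chosen from a blend of its content affinity and a vote over its graph neighbors' labels.

import numpy as np

def relax_labels(content_sim, adj, labels, k, alpha=0.6, iters=10):
    """content_sim[i, c]: similarity of doc i to cluster c;
    adj: adjacency lists; alpha trades content against link evidence."""
    labels = labels.copy()
    for _ in range(iters):
        for i in range(len(labels)):
            votes = np.zeros(k)
            for j in adj[i]:
                votes[labels[j]] += 1.0
            if adj[i]:
                votes /= len(adj[i])  # fraction of neighbors per cluster
            labels[i] = int(np.argmax(alpha * content_sim[i] + (1 - alpha) * votes))
    return labels

content_sim = np.array([[0.9, 0.1], [0.6, 0.4], [0.2, 0.8], [0.45, 0.55]])
adj = {0: [1], 1: [0, 3], 2: [3], 3: [1, 2]}
labels = np.argmax(content_sim, axis=1)  # content-only initialization
print(relax_labels(content_sim, adj, labels, k=2))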
Output-sensitive autocompletion search
H. Bast, I. Weber and C. W. Mortensen
Technical Report, 2006
H. Bast, I. Weber and C. W. Mortensen
Technical Report, 2006
Abstract
We consider the following autocompletion search scenario: imagine a user
of a search engine typing a query; then with every keystroke display those
completions of the last query word that would lead to the best hits, and
also display the best such hits. The following problem is at the core of
this feature: for a fixed document collection, given a set $D$ of
documents, and an alphabetical range $W$ of words, compute the set of all
word-in-document pairs $(w,d)$ from the collection such that $w \in W$
and $d\in D$.
We present a new data structure with the help of which such
autocompletion queries can be processed, on the average, in time linear
in the input plus output size, independent of the size of the underlying
document collection. At the same time, our data structure uses no more
space than an inverted index. Actual query processing times on a large test collection
correlate almost perfectly with our theoretical bound.
Export
BibTeX
@techreport{BastWeberMortensen2006,
TITLE = {Output-sensitive autocompletion search},
AUTHOR = {Bast, Holger and Weber, Ingmar and Mortensen, Christian Worm},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-1-007},
NUMBER = {MPI-I-2006-1-007},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {We consider the following autocompletion search scenario: imagine a user of a search engine typing a query; then with every keystroke display those completions of the last query word that would lead to the best hits, and also display the best such hits. The following problem is at the core of this feature: for a fixed document collection, given a set $D$ of documents, and an alphabetical range $W$ of words, compute the set of all word-in-document pairs $(w,d)$ from the collection such that $w \in W$ and $d\in D$. We present a new data structure with the help of which such autocompletion queries can be processed, on the average, in time linear in the input plus output size, independent of the size of the underlying document collection. At the same time, our data structure uses no more space than an inverted index. Actual query processing times on a large test collection correlate almost perfectly with our theoretical bound.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Bast, Holger
%A Weber, Ingmar
%A Mortensen, Christian Worm
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Output-sensitive autocompletion search :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-681A-D
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-1-007
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 17 p.
%X We consider the following autocompletion search scenario: imagine a user
of a search engine typing a query; then with every keystroke display those
completions of the last query word that would lead to the best hits, and
also display the best such hits. The following problem is at the core of
this feature: for a fixed document collection, given a set $D$ of
documents, and an alphabetical range $W$ of words, compute the set of all
word-in-document pairs $(w,d)$ from the collection such that $w \in W$
and $d\in D$.
We present a new data structure with the help of which such
autocompletion queries can be processed, on the average, in time linear
in the input plus output size, independent of the size of the underlying
document collection. At the same time, our data structure uses no more
space than an inverted index. Actual query processing times on a large test collection
correlate almost perfectly with our theoretical bound.
%B Research Report / Max-Planck-Institut für Informatik
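The core query semantics can be pinned down with a naive baseline (illustrative Python; the report's contribution is a data structure answering the same queries in output-sensitive time, which this brute-force scan does not attempt):

import bisect

vocab = ["info", "informatics", "information", "informer", "ingo"]  # sorted
inverted = {"info": [1, 4], "informatics": [2], "information": [1, 2, 3],
            "informer": [5], "ingo": [4]}

def autocomplete(prefix, D):
    """All (word, doc) pairs with word completing `prefix` and doc in D."""
    lo = bisect.bisect_left(vocab, prefix)
    hi = bisect.bisect_left(vocab, prefix + "\uffff")  # end of the range W
    matches = []
    for w in vocab[lo:hi]:  # every completion in the alphabetical range
        matches.extend((w, d) for d in inverted[w] if d in D)
    return matches

print(autocomplete("inform", {1, 2, 5}))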
IO-Top-k: index-access optimized top-k query processing
H. Bast, D. Majumdar, R. Schenkel, M. Theobald and G. Weikum
Technical Report, 2006
H. Bast, D. Majumdar, R. Schenkel, M. Theobald and G. Weikum
Technical Report, 2006
Abstract
Top-k query processing is an important building block for ranked retrieval,
with applications ranging from text and data integration to distributed
aggregation of network logs and sensor data.
Top-k queries operate on index lists for a query's elementary conditions
and aggregate scores for result candidates. One of the best implementation
methods in this setting is the family of threshold algorithms, which aim
to terminate the index scans as early as possible based on lower and upper
bounds for the final scores of result candidates. This procedure
performs sequential disk accesses for sorted index scans, but also has the option
of performing random accesses to resolve score uncertainty. This entails
scheduling for the two kinds of accesses: 1) the prioritization of different
index lists in the sequential accesses, and 2) the decision on when to perform
random accesses and for which candidates.
The prior literature has studied some of these scheduling issues, but only for each of the two access types in isolation.
The current paper takes an integrated view of the scheduling issues and develops
novel strategies that outperform prior proposals by a large margin.
Our main contributions are new, principled, scheduling methods based on a Knapsack-related
optimization for sequential accesses and a cost model for random accesses.
The methods can be further boosted by harnessing probabilistic estimators for scores,
selectivities, and index list correlations.
We also discuss efficient implementation techniques for the
underlying data structures.
In performance experiments with three different datasets (TREC Terabyte, HTTP server logs, and IMDB),
our methods achieved significant performance gains compared to the best previously known methods:
a factor of up to 3 in terms of execution costs, and a factor of 5
in terms of absolute run-times of our implementation.
Our best techniques are close to a lower bound for the execution cost of the considered class
of threshold algorithms.
Export
BibTeX
@techreport{BastMajumdarSchenkelTheobaldWeikum2006,
TITLE = {{IO}-Top-k: index-access optimized top-k query processing},
AUTHOR = {Bast, Holger and Majumdar, Debapriyo and Schenkel, Ralf and Theobald, Martin and Weikum, Gerhard},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-5-002},
NUMBER = {MPI-I-2006-5-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {Top-k query processing is an important building block for ranked retrieval, with applications ranging from text and data integration to distributed aggregation of network logs and sensor data. Top-k queries operate on index lists for a query's elementary conditions and aggregate scores for result candidates. One of the best implementation methods in this setting is the family of threshold algorithms, which aim to terminate the index scans as early as possible based on lower and upper bounds for the final scores of result candidates. This procedure performs sequential disk accesses for sorted index scans, but also has the option of performing random accesses to resolve score uncertainty. This entails scheduling for the two kinds of accesses: 1) the prioritization of different index lists in the sequential accesses, and 2) the decision on when to perform random accesses and for which candidates. The prior literature has studied some of these scheduling issues, but only for each of the two access types in isolation. The current paper takes an integrated view of the scheduling issues and develops novel strategies that outperform prior proposals by a large margin. Our main contributions are new, principled, scheduling methods based on a Knapsack-related optimization for sequential accesses and a cost model for random accesses. The methods can be further boosted by harnessing probabilistic estimators for scores, selectivities, and index list correlations. We also discuss efficient implementation techniques for the underlying data structures. In performance experiments with three different datasets (TREC Terabyte, HTTP server logs, and IMDB), our methods achieved significant performance gains compared to the best previously known methods: a factor of up to 3 in terms of execution costs, and a factor of 5 in terms of absolute run-times of our implementation. Our best techniques are close to a lower bound for the execution cost of the considered class of threshold algorithms.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Bast, Holger
%A Majumdar, Debapriyo
%A Schenkel, Ralf
%A Theobald, Martin
%A Weikum, Gerhard
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T IO-Top-k: index-access optimized top-k query processing :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6716-E
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-5-002
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 49 p.
%X Top-k query processing is an important building block for ranked retrieval,
with applications ranging from text and data integration to distributed
aggregation of network logs and sensor data.
Top-k queries operate on index lists for a query's elementary conditions
and aggregate scores for result candidates. One of the best implementation
methods in this setting is the family of threshold algorithms, which aim
to terminate the index scans as early as possible based on lower and upper
bounds for the final scores of result candidates. This procedure
performs sequential disk accesses for sorted index scans, but also has the option
of performing random accesses to resolve score uncertainty. This entails
scheduling for the two kinds of accesses: 1) the prioritization of different
index lists in the sequential accesses, and 2) the decision on when to perform
random accesses and for which candidates.
The prior literature has studied some of these scheduling issues, but only for each of the two access types in isolation.
The current paper takes an integrated view of the scheduling issues and develops
novel strategies that outperform prior proposals by a large margin.
Our main contributions are new, principled, scheduling methods based on a Knapsack-related
optimization for sequential accesses and a cost model for random accesses.
The methods can be further boosted by harnessing probabilistic estimators for scores,
selectivities, and index list correlations.
We also discuss efficient implementation techniques for the
underlying data structures.
In performance experiments with three different datasets (TREC Terabyte, HTTP server logs, and IMDB),
our methods achieved significant performance gains compared to the best previously known methods:
a factor of up to 3 in terms of execution costs, and a factor of 5
in terms of absolute run-times of our implementation.
Our best techniques are close to a lower bound for the execution cost of the considered class
of threshold algorithms.
%B Research Report / Max-Planck-Institut für Informatik
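For orientation, here is the classic threshold algorithm this family builds on, in compact form (our own toy Python; the report's contribution, the scheduling of sequential versus random accesses, is deliberately not modelled):

def ta_top1(index_lists):
    """index_lists: per term, [(doc, score)] sorted by score descending."""
    # "random access" oracle: full score of a doc across all lists
    full = {d: sum(s for lst in index_lists for dd, s in lst if dd == d)
            for lst in index_lists for d, _ in lst}
    seen, best, depth = set(), (None, -1.0), 0
    while depth < max(map(len, index_lists)):
        frontier = 0.0
        for lst in index_lists:  # one round-robin sorted-access step
            if depth < len(lst):
                doc, s = lst[depth]
                frontier += s
                if doc not in seen:
                    seen.add(doc)  # resolve the doc via random accesses
                    if full[doc] > best[1]:
                        best = (doc, full[doc])
        if best[1] >= frontier:  # threshold test: unseen docs cannot win
            break
        depth += 1
    return best

lists = [[("d1", 0.9), ("d2", 0.6), ("d3", 0.1)],
         [("d2", 0.8), ("d1", 0.3), ("d3", 0.2)]]
print(ta_top1(lists))  # ("d2", 1.4) after scanning only two rounds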
Mean value coordinates for arbitrary spherical polygons and polyhedra in $\mathbb{R}^{3}$
A. Belyaev, T. Langer and H.-P. Seidel
Technical Report, 2006
A. Belyaev, T. Langer and H.-P. Seidel
Technical Report, 2006
Abstract
Since their introduction, mean value coordinates enjoy ever increasing
popularity in computer graphics and computational mathematics
because they exhibit a variety of good properties. Most importantly,
they are defined in the whole plane which allows interpolation and
extrapolation without restrictions. Recently, mean value coordinates
were generalized to spheres and to $\mathbb{R}^{3}$. We show that these
spherical and 3D mean value coordinates are well-defined on the whole
sphere and the whole space $\mathbb{R}^{3}$, respectively.
Export
BibTeX
@techreport{BelyaevLangerSeidel2006,
TITLE = {Mean value coordinates for arbitrary spherical polygons and polyhedra in $\mathbb{R}^{3}$},
AUTHOR = {Belyaev, Alexander and Langer, Torsten and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-010},
NUMBER = {MPI-I-2006-4-010},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {Since their introduction, mean value coordinates enjoy ever increasing popularity in computer graphics and computational mathematics because they exhibit a variety of good properties. Most importantly, they are defined in the whole plane which allows interpolation and extrapolation without restrictions. Recently, mean value coordinates were generalized to spheres and to $\mathbb{R}^{3}$. We show that these spherical and 3D mean value coordinates are well-defined on the whole sphere and the whole space $\mathbb{R}^{3}$, respectively.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Belyaev, Alexander
%A Langer, Torsten
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Mean value coordinates for arbitrary spherical polygons and polyhedra in $\mathbb{R}^{3}$ :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-671C-2
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-010
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 19 p.
%X Since their introduction, mean value coordinates enjoy ever increasing
popularity in computer graphics and computational mathematics
because they exhibit a variety of good properties. Most importantly,
they are defined in the whole plane which allows interpolation and
extrapolation without restrictions. Recently, mean value coordinates
were generalized to spheres and to $\mathbb{R}^{3}$. We show that these
spherical and 3D mean value coordinates are well-defined on the whole
sphere and the whole space $\mathbb{R}^{3}$, respectively.
%B Research Report / Max-Planck-Institut für Informatik
Skeleton-driven Laplacian Mesh Deformations
A. Belyaev, S. Yoshizawa and H.-P. Seidel
Technical Report, 2006
A. Belyaev, S. Yoshizawa and H.-P. Seidel
Technical Report, 2006
Abstract
In this report, a new free-form shape deformation approach is proposed.
We combine a skeleton-driven mesh deformation technique with discrete
differential coordinates in order to create natural-looking global shape
deformations. Given a triangle mesh, we first extract a skeletal mesh, a
two-sided
Voronoi-based approximation of the medial axis. Next the skeletal mesh
is modified by free-form deformations. Then a desired global shape
deformation is obtained by reconstructing the shape corresponding to the
deformed skeletal mesh. The reconstruction is based on using discrete
differential coordinates.
Our method preserves fine geometric details and original shape
thickness because of using discrete differential coordinates and
skeleton-driven deformations. We also develop a new mesh evolution
technique which allows us to eliminate possible global and local
self-intersections of the deformed mesh while preserving fine geometric
details. Finally, we present a multiresolution version of our approach
in order to simplify and accelerate the deformation process.
Export
BibTeX
@techreport{BelyaevSeidelShin2006,
TITLE = {Skeleton-driven {Laplacian} Mesh Deformations},
AUTHOR = {Belyaev, Alexander and Yoshizawa, Shin and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-005},
NUMBER = {MPI-I-2006-4-005},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {In this report, a new free-form shape deformation approach is proposed. We combine a skeleton-driven mesh deformation technique with discrete differential coordinates in order to create natural-looking global shape deformations. Given a triangle mesh, we first extract a skeletal mesh, a two-sided Voronoi-based approximation of the medial axis. Next the skeletal mesh is modified by free-form deformations. Then a desired global shape deformation is obtained by reconstructing the shape corresponding to the deformed skeletal mesh. The reconstruction is based on using discrete differential coordinates. Our method preserves fine geometric details and original shape thickness because of using discrete differential coordinates and skeleton-driven deformations. We also develop a new mesh evolution technique which allows us to eliminate possible global and local self-intersections of the deformed mesh while preserving fine geometric details. Finally, we present a multiresolution version of our approach in order to simplify and accelerate the deformation process.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Belyaev, Alexander
%A Yoshizawa, Shin
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Skeleton-driven Laplacian Mesh Deformations :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-67FF-6
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-005
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 37 p.
%X In this report, a new free-form shape deformation approach is proposed.
We combine a skeleton-driven mesh deformation technique with discrete
differential coordinates in order to create natural-looking global shape
deformations. Given a triangle mesh, we first extract a skeletal mesh, a
two-sided
Voronoi-based approximation of the medial axis. Next the skeletal mesh
is modified by free-form deformations. Then a desired global shape
deformation is obtained by reconstructing the shape corresponding to the
deformed skeletal mesh. The reconstruction is based on using discrete
differential coordinates.
Our method preserves fine geometric details and original shape
thickness because of using discrete differential coordinates and
skeleton-driven deformations. We also develop a new mesh evolution
technique which allows us to eliminate possible global and local
self-intersections of the deformed mesh while preserving fine geometric
details. Finally, we present a multiresolution version of our approach
in order to simplify and accelerate the deformation process.
%B Research Report / Max-Planck-Institut für Informatik
Overlap-aware global df estimation in distributed information retrieval systems
M. Bender, S. Michel, G. Weikum and P. Triantafillou
Technical Report, 2006
M. Bender, S. Michel, G. Weikum and P. Triantafillou
Technical Report, 2006
Abstract
Peer-to-Peer (P2P) search engines and other forms of distributed
information retrieval (IR) are gaining momentum. Unlike in centralized
IR, it is difficult and expensive to compute statistical measures about
the entire document collection as it is widely distributed across many
computers in a highly dynamic network. On the other hand, such
network-wide statistics, most notably, global document frequencies of
the individual terms, would be highly beneficial for ranking global
search results that are compiled from different peers.
This paper develops an efficient and scalable method for estimating
global document frequencies in a large-scale, highly dynamic P2P network
with autonomous peers. The main difficulty that is addressed in this
paper is that the local collections of different peers
may arbitrarily overlap, as many peers may choose to gather popular
documents that fall into their specific interest profile.
Our method is based on hash sketches as an underlying technique for
compact data synopses, and exploits specific properties of hash sketches
for duplicate elimination in the counting process.
We report on experiments with real Web data that demonstrate the
accuracy of our estimation method and also the benefit for better search
result ranking.
Export
BibTeX
@techreport{BenderMichelWeikumTriantafilou2006,
TITLE = {Overlap-aware global df estimation in distributed information retrieval systems},
AUTHOR = {Bender, Matthias and Michel, Sebastian and Weikum, Gerhard and Triantafillou, Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-5-001},
NUMBER = {MPI-I-2006-5-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {Peer-to-Peer (P2P) search engines and other forms of distributed information retrieval (IR) are gaining momentum. Unlike in centralized IR, it is difficult and expensive to compute statistical measures about the entire document collection as it is widely distributed across many computers in a highly dynamic network. On the other hand, such network-wide statistics, most notably, global document frequencies of the individual terms, would be highly beneficial for ranking global search results that are compiled from different peers. This paper develops an efficient and scalable method for estimating global document frequencies in a large-scale, highly dynamic P2P network with autonomous peers. The main difficulty that is addressed in this paper is that the local collections of different peers may arbitrarily overlap, as many peers may choose to gather popular documents that fall into their specific interest profile. Our method is based on hash sketches as an underlying technique for compact data synopses, and exploits specific properties of hash sketches for duplicate elimination in the counting process. We report on experiments with real Web data that demonstrate the accuracy of our estimation method and also the benefit for better search result ranking.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Bender, Matthias
%A Michel, Sebastian
%A Weikum, Gerhard
%A Triantafillou, Peter
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
External Organizations
%T Overlap-aware global df estimation in distributed information retrieval systems :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6719-8
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-5-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 25 p.
%X Peer-to-Peer (P2P) search engines and other forms of distributed
information retrieval (IR) are gaining momentum. Unlike in centralized
IR, it is difficult and expensive to compute statistical measures about
the entire document collection as it is widely distributed across many
computers in a highly dynamic network. On the other hand, such
network-wide statistics, most notably, global document frequencies of
the individual terms, would be highly beneficial for ranking global
search results that are compiled from different peers.
This paper develops an efficient and scalable method for estimating
global document frequencies in a large-scale, highly dynamic P2P network
with autonomous peers. The main difficulty that is addressed in this
paper is that the local collections of different peers
may arbitrarily overlap, as many peers may choose to gather popular
documents that fall into their specific interest profile.
Our method is based on hash sketches as an underlying technique for
compact data synopses, and exploits specific properties of hash sketches
for duplicate elimination in the counting process.
We report on experiments with real Web data that demonstrate the
accuracy of our estimation method and also the benefit for better search
result ranking.
%B Research Report / Max-Planck-Institut für Informatik
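The duplicate-elimination property of hash sketches is easy to demonstrate (a minimal Flajolet-Martin sketch in Python, our own simplification using a single bitmap; production estimators average many sketches):

import hashlib

def fm_sketch(doc_ids, bits=32):
    """Summarize a set of document ids into a small bitmap."""
    bitmap = 0
    for d in doc_ids:
        h = int(hashlib.sha1(str(d).encode()).hexdigest(), 16)
        r = (h & -h).bit_length() - 1  # position of the lowest set bit
        bitmap |= 1 << min(r, bits - 1)
    return bitmap

def fm_estimate(bitmap):
    r = 0
    while bitmap & (1 << r):  # first zero bit position
        r += 1
    return 2 ** r / 0.77351  # classic FM correction constant

peer_a = fm_sketch(range(0, 800))
peer_b = fm_sketch(range(400, 1200))  # overlaps with peer_a
union = peer_a | peer_b  # bitwise OR merges peers, duplicates count once
print(round(fm_estimate(union)))  # coarse estimate of the ~1200 distinct docs

Because merging is a bitwise OR, a document gathered by many peers still sets the same bits only once, which is exactly the overlap-awareness the abstract describes.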
Definition of File Format for Benchmark Instances for Arrangements of Quadrics
E. Berberich, F. Ebert and L. Kettner
Technical Report, 2006
E. Berberich, F. Ebert and L. Kettner
Technical Report, 2006
Export
BibTeX
@techreport{acs:bek-dffbiaq-06,
TITLE = {Definition of File Format for Benchmark Instances for Arrangements of Quadrics},
AUTHOR = {Berberich, Eric and Ebert, Franziska and Kettner, Lutz},
LANGUAGE = {eng},
NUMBER = {ACS-TR-123109-01},
INSTITUTION = {University of Groningen},
ADDRESS = {Groningen, The Netherlands},
YEAR = {2006},
DATE = {2006},
}
Endnote
%0 Report
%A Berberich, Eric
%A Ebert, Franziska
%A Kettner, Lutz
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Definition of File Format for Benchmark Instances for Arrangements of Quadrics :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0019-E509-E
%Y University of Groningen
%C Groningen, The Netherlands
%D 2006
Web-site with Benchmark Instances for Planar Curve Arrangements
E. Berberich, F. Ebert, E. Fogel and L. Kettner
Technical Report, 2006
E. Berberich, F. Ebert, E. Fogel and L. Kettner
Technical Report, 2006
Export
BibTeX
@techreport{acs:bek-wbipca-06,
TITLE = {Web-site with Benchmark Instances for Planar Curve Arrangements},
AUTHOR = {Berberich, Eric and Ebert, Franziska and Fogel, Efi and Kettner, Lutz},
LANGUAGE = {eng},
NUMBER = {ACS-TR-123108-01},
INSTITUTION = {University of Groningen},
ADDRESS = {Groningen, The Netherlands},
YEAR = {2006},
DATE = {2006},
}
Endnote
%0 Report
%A Berberich, Eric
%A Ebert, Franziska
%A Fogel, Efi
%A Kettner, Lutz
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Web-site with Benchmark Instances for Planar Curve Arrangements :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0019-E515-1
%Y University of Groningen
%C Groningen, The Netherlands
%D 2006
A framework for natural animation of digitized models
E. de Aguiar, R. Zayer, C. Theobalt, M. A. Magnor and H.-P. Seidel
Technical Report, 2006
E. de Aguiar, R. Zayer, C. Theobalt, M. A. Magnor and H.-P. Seidel
Technical Report, 2006
Abstract
We present a novel, versatile, fast and simple framework to generate
high-quality animations of scanned human characters from input motion data.
Our method is purely mesh-based and, in contrast to skeleton-based
animation, requires only a minimum of manual interaction. The only manual
step that is required to create moving virtual people is the placement of
a sparse set of correspondences between triangles of an input mesh and
triangles of the mesh to be animated. The proposed algorithm implicitly
generates realistic body deformations, and can easily transfer motions
between humans of different shape and proportions. Different types of input
data, e.g. other animated meshes and motion capture files, are handled in
just the same way. Finally, and most importantly, it creates animations at
interactive frame rates. We feature two working prototype systems that
demonstrate that our method can generate lifelike character animations from
both marker-based and marker-less optical motion capture data.
Export
BibTeX
@techreport{deAguiarZayerTheobaltMagnorSeidel2006,
TITLE = {A framework for natural animation of digitized models},
AUTHOR = {de Aguiar, Edilson and Zayer, Rhaleb and Theobalt, Christian and Magnor, Marcus A. and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-003},
NUMBER = {MPI-I-2006-4-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {We present a novel versatile, fast and simple framework to generate high-quality animations of scanned human characters from input motion data. Our method is purely mesh-based and, in contrast to skeleton-based animation, requires only a minimum of manual interaction. The only manual step that is required to create moving virtual people is the placement of a sparse set of correspondences between triangles of an input mesh and triangles of the mesh to be animated. The proposed algorithm implicitly generates realistic body deformations, and can easily transfer motions between humans of different shape and proportions. It handles different types of input data, e.g. other animated meshes and motion capture files, in just the same way. Finally, and most importantly, it creates animations at interactive frame rates. We feature two working prototype systems that demonstrate that our method can generate lifelike character animations from both marker-based and marker-less optical motion capture data.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A de Aguiar, Edilson
%A Zayer, Rhaleb
%A Theobalt, Christian
%A Magnor, Marcus A.
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Graphics - Optics - Vision, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T A framework for natural animation of digitized models :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-680B-F
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-003
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 27 p.
%X We present a novel versatile, fast and simple framework to generate
high-quality animations of scanned human characters from input motion data.
Our method is purely mesh-based and, in contrast to skeleton-based
animation, requires only a minimum of manual interaction. The only manual
step that is required to create moving virtual people is the placement of
a sparse set of correspondences between triangles of an input mesh and
triangles of the mesh to be animated. The proposed algorithm implicitly
generates realistic body deformations, and can easily transfer motions
between humans of different shape and proportions. It handles different
types of input data, e.g. other animated meshes and motion capture files,
in just the same way. Finally, and most importantly, it creates animations
at interactive frame rates. We feature two working prototype systems that
demonstrate that our method can generate lifelike character animations
from both marker-based and marker-less optical motion capture data.
%B Research Report / Max-Planck-Institut für Informatik
Construction of Low-discrepancy Point Sets of Small Size by Bracketing Covers and Dependent Randomized Rounding
B. Doerr and M. Gnewuch
Technical Report, 2006
B. Doerr and M. Gnewuch
Technical Report, 2006
Abstract
We provide a deterministic algorithm that constructs small point sets
exhibiting a low star discrepancy. The algorithm is based on bracketing and on
recent results on randomized roundings respecting hard constraints. It is
structurally much simpler than the previous algorithm presented for this
problem in [B. Doerr, M. Gnewuch, A. Srivastav. Bounds and constructions for
the star discrepancy via δ-covers. J. Complexity, 21: 691-709, 2005]. Besides
leading to better theoretical run time bounds, our approach can be implemented
with reasonable effort.
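The star discrepancy being minimized here can be evaluated by brute force
for small instances, which makes the objective concrete. A Python sketch
(illustrative only; the grid search is exponential in the dimension):

import itertools
import numpy as np

def star_discrepancy_lb(points):
    # Lower bound on D*(P): check anchored boxes [0,x) and [0,x] whose
    # corners come from the points' coordinate grid (plus 1.0); for small
    # sets this search attains or approaches the true supremum.
    pts = np.asarray(points)
    n, d = pts.shape
    grids = [sorted(set(pts[:, j]) | {1.0}) for j in range(d)]
    worst = 0.0
    for corner in itertools.product(*grids):
        x = np.array(corner)
        vol = float(np.prod(x))
        frac_open = np.all(pts < x, axis=1).mean()
        frac_closed = np.all(pts <= x, axis=1).mean()
        worst = max(worst, abs(vol - frac_open), abs(frac_closed - vol))
    return worst

# e.g. star_discrepancy_lb(np.random.rand(16, 2))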
Export
BibTeX
@techreport{SemKiel,
TITLE = {Construction of Low-discrepancy Point Sets of Small Size by Bracketing Covers and Dependent Randomized Rounding},
AUTHOR = {Doerr, Benjamin and Gnewuch, Michael},
LANGUAGE = {eng},
NUMBER = {06-14},
INSTITUTION = {University of Kiel},
ADDRESS = {Kiel},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {We provide a deterministic algorithm that constructs small point sets exhibiting a low star discrepancy. The algorithm is based on bracketing and on recent results on randomized roundings respecting hard constraints. It is structurally much simpler than the previous algorithm presented for this problem in [B. Doerr, M. Gnewuch, A. Srivastav. Bounds and constructions for the star discrepancy via $\delta$-covers. J. Complexity, 21: 691-709, 2005]. Besides leading to better theoretical run time bounds, our approach can be implemented with reasonable effort.},
}
Endnote
%0 Report
%A Doerr, Benjamin
%A Gnewuch, Michael
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T Construction of Low-discrepancy Point Sets of Small Size by Bracketing Covers and Dependent Randomized Rounding :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0019-E49F-6
%Y University of Kiel
%C Kiel
%D 2006
%X We provide a deterministic algorithm that constructs small point sets
exhibiting a low star discrepancy. The algorithm is based on bracketing and on
recent results on randomized roundings respecting hard constraints. It is
structurally much simpler than the previous algorithm presented for this
problem in [B. Doerr, M. Gnewuch, A. Srivastav. Bounds and constructions for
the star discrepancy via δ-covers. J. Complexity, 21: 691-709, 2005]. Besides
leading to better theoretical run time bounds, our approach can be implemented
with reasonable effort.
Design and evaluation of backward compatible high dynamic range video compression
A. Efremov, R. Mantiuk, K. Myszkowski and H.-P. Seidel
Technical Report, 2006
A. Efremov, R. Mantiuk, K. Myszkowski and H.-P. Seidel
Technical Report, 2006
Abstract
In this report we describe the details of the backward compatible high
dynamic range (HDR) video compression algorithm. The algorithm is
designed to facilitate a smooth transition from standard low dynamic
range (LDR) video to high fidelity high dynamic range content. The HDR
and the corresponding LDR video frames are decorrelated and then
compressed into a single MPEG stream, which can be played on both
existing DVD players and HDR-enabled devices.
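The decorrelation step can be pictured with a toy reconstruction function
that predicts HDR luminance from the LDR frame, leaving a residual for an
enhancement layer. A Python sketch, assuming a per-frame mean-log-luminance
lookup per 8-bit LDR value (not the report's exact transform or stream
layout):

import numpy as np

def decorrelate(hdr_lum, ldr):
    # Fit a reconstruction function: mean log-luminance per LDR bin.
    logl = np.log(hdr_lum + 1e-6)
    recon = np.zeros(256)
    for v in range(256):
        mask = (ldr == v)
        recon[v] = logl[mask].mean() if mask.any() else (recon[v - 1] if v else 0.0)
    residual = logl - recon[ldr]   # what the enhancement layer would carry
    return recon, residual

def reconstruct(ldr, recon, residual):
    # An HDR-enabled decoder combines both layers; a legacy decoder
    # simply plays the LDR stream.
    return np.exp(recon[ldr] + residual) - 1e-6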
Export
BibTeX
@techreport{EfremovMantiukMyszkowskiSeidel,
TITLE = {Design and evaluation of backward compatible high dynamic range video compression},
AUTHOR = {Efremov, Alexander and Mantiuk, Rafal and Myszkowski, Karol and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-001},
NUMBER = {MPI-I-2006-4-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {In this report we describe the details of the backward compatible high dynamic range (HDR) video compression algorithm. The algorithm is designed to facilitate a smooth transition from standard low dynamic range (LDR) video to high fidelity high dynamic range content. The HDR and the corresponding LDR video frames are decorrelated and then compressed into a single MPEG stream, which can be played on both existing DVD players and HDR-enabled devices.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Efremov, Alexander
%A Mantiuk, Rafal
%A Myszkowski, Karol
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Design and evaluation of backward compatible high dynamic range video compression :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6811-0
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 50 p.
%X In this report we describe the details of the backward compatible high
dynamic range (HDR) video compression algorithm. The algorithm is
designed to facilitate a smooth transition from standard low dynamic
range (LDR) video to high fidelity high dynamic range content. The HDR
and the corresponding LDR video frames are decorrelated and then
compressed into a single MPEG stream, which can be played on both
existing DVD players and HDR-enabled devices.
%B Research Report / Max-Planck-Institut für Informatik
On the Complexity of Monotone Boolean Duality Testing
K. Elbassioni
Technical Report, 2006
K. Elbassioni
Technical Report, 2006
Abstract
We show that the duality of a pair of monotone Boolean functions in disjunctive
normal forms can be tested in polylogarithmic time using a quasi-polynomial
number of processors. Our decomposition technique yields stronger bounds on the
complexity of the problem than those currently known and also allows for
generating all minimal transversals of a given hypergraph using only polynomial
space.
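The duality condition itself is simple to state and, for small inputs, to
test by brute force: f and g are dual iff f(x) and g on the complemented
assignment disagree for every x. A Python sketch (exponential; the
report's point is the parallel complexity bound, not this naive check):

from itertools import product

def eval_dnf(terms, true_vars):
    # terms: iterable of frozensets of variable indices (monotone DNF)
    return any(t <= true_vars for t in terms)

def are_dual(f_terms, g_terms, n):
    universe = frozenset(range(n))
    for bits in product([0, 1], repeat=n):
        x = frozenset(i for i in range(n) if bits[i])
        if eval_dnf(f_terms, x) == eval_dnf(g_terms, universe - x):
            return False   # f(x) must equal NOT g(complement of x)
    return True

# Example: f = x1*x2 + x3 has dual g = x1*x3 + x2*x3 (0-based indices):
# are_dual({frozenset({0, 1}), frozenset({2})},
#          {frozenset({0, 2}), frozenset({1, 2})}, 3)  ->  True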
Export
BibTeX
@techreport{Elbassioni2006,
TITLE = {On the Complexity of Monotone {Boolean} Duality Testing},
AUTHOR = {Elbassioni, Khaled},
LANGUAGE = {eng},
NUMBER = {DIMACS TR: 2006-01},
INSTITUTION = {DIMACS},
ADDRESS = {Piscataway, NJ},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {We show that the duality of a pair of monotone Boolean functions in disjunctive normal forms can be tested in polylogarithmic time using a quasi-polynomial number of processors. Our decomposition technique yields stronger bounds on the complexity of the problem than those currently known and also allows for generating all minimal transversals of a given hypergraph using only polynomial space.},
}
Endnote
%0 Report
%A Elbassioni, Khaled
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On the Complexity of Monotone Boolean Duality Testing :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0019-E4CA-2
%Y DIMACS
%C Piscataway, NJ
%D 2006
%X We show that the duality of a pair of monotone Boolean functions in disjunctive
normal forms can be tested in polylogarithmic time using a quasi-polynomial
number of processors. Our decomposition technique yields stronger bounds on the
complexity of the problem than those currently known and also allows for
generating all minimal transversals of a given hypergraph using only polynomial
space.
Controlled Perturbation for Delaunay Triangulations
S. Funke, C. Klein, K. Mehlhorn and S. Schmitt
Technical Report, 2006
S. Funke, C. Klein, K. Mehlhorn and S. Schmitt
Technical Report, 2006
Export
BibTeX
@techreport{acstr123109-01,
TITLE = {Controlled Perturbation for Delaunay Triangulations},
AUTHOR = {Funke, Stefan and Klein, Christian and Mehlhorn, Kurt and Schmitt, Susanne},
LANGUAGE = {eng},
NUMBER = {ACS-TR-121103-03},
INSTITUTION = {Algorithms for Complex Shapes with certified topology and numerics},
ADDRESS = {Instituut voor Wiskunde en Informatica, Groningen, The Netherlands},
YEAR = {2006},
DATE = {2006},
}
Endnote
%0 Report
%A Funke, Stefan
%A Klein, Christian
%A Mehlhorn, Kurt
%A Schmitt, Susanne
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Controlled Perturbation for Delaunay Triangulations :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0027-F72F-3
%Y Algorithms for Complex Shapes with certified topology and numerics
%C Instituut voor Wiskunde en Informatica, Groningen, The Netherlands
%D 2006
Power assignment problems in wireless communication
S. Funke, S. Laue, R. Naujoks and Z. Lotker
Technical Report, 2006
S. Funke, S. Laue, R. Naujoks and Z. Lotker
Technical Report, 2006
Abstract
A fundamental class of problems in wireless communication is concerned
with the assignment of suitable transmission powers to wireless
devices/stations such that the
resulting communication graph satisfies certain desired properties and
the overall energy consumed is minimized. Many concrete communication
tasks in a
wireless network like broadcast, multicast, point-to-point routing,
creation of a communication backbone, etc. can be regarded as such a
power assignment problem.
This paper considers several problems of that kind; for example, one
problem studied before in (Vittorio Bilò et al.: Geometric Clustering
to Minimize the Sum of Cluster Sizes, ESA 2005) and (Helmut Alt et al.:
Minimum-cost coverage of point sets by disks, SCG 2006) aims to select
and assign powers to $k$ of the stations such that all other stations
are within reach of at least one of the selected stations. We improve
the running time for obtaining a $(1+\epsilon)$-approximate solution
for this problem from $n^{((\alpha/\epsilon)^{O(d)})}$, as reported by
Bilò et al., to
$O\left( n+ {\left(\frac{k^{2d+1}}{\epsilon^d}\right)}^{ \min{\{\; 2k,\;\; (\alpha/\epsilon)^{O(d)} \;\}} } \right)$,
that is, we obtain a running time that is \emph{linear} in the network
size. Further results include a constant approximation algorithm for
the TSP problem under squared (non-metric!) edge costs, which can be
employed to implement a novel data aggregation protocol, as well as
efficient schemes to perform $k$-hop multicasts.
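The $k$-station variant has a compact exhaustive baseline that makes the
objective concrete: choose $k$ stations and power each to reach all points
assigned to it, minimizing the summed powers range^alpha. A Python sketch
(exponential in $k$, for illustration only; the report's contribution is a
$(1+\epsilon)$-approximation running in time linear in the network size):

import itertools
import numpy as np

def k_station_cover(points, k, alpha=2.0):
    pts = np.asarray(points)
    n = len(pts)
    dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    best_cost, best_subset = float('inf'), None
    for subset in itertools.combinations(range(n), k):
        radii = np.zeros(k)
        for j in range(n):
            # every point is served by its nearest selected station
            s = min(range(k), key=lambda i: dist[subset[i], j])
            radii[s] = max(radii[s], dist[subset[s], j])
        cost = float((radii ** alpha).sum())
        if cost < best_cost:
            best_cost, best_subset = cost, subset
    return best_cost, best_subset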
Export
BibTeX
@techreport{FunkeLaueNaujoksLotker2006,
TITLE = {Power assignment problems in wireless communication},
AUTHOR = {Funke, Stefan and Laue, S{\"o}ren and Naujoks, Rouven and Lotker, Zvi},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-1-004},
NUMBER = {MPI-I-2006-1-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {A fundamental class of problems in wireless communication is concerned with the assignment of suitable transmission powers to wireless devices/stations such that the resulting communication graph satisfies certain desired properties and the overall energy consumed is minimized. Many concrete communication tasks in a wireless network like broadcast, multicast, point-to-point routing, creation of a communication backbone, etc. can be regarded as such a power assignment problem. This paper considers several problems of that kind; for example one problem studied before in (Vittorio Bil{\`o} et al: Geometric Clustering to Minimize the Sum of Cluster Sizes, ESA 2005) and (Helmut Alt et al.: Minimum-cost coverage of point sets by disks, SCG 2006) aims to select and assign powers to $k$ of the stations such that all other stations are within reach of at least one of the selected stations. We improve the running time for obtaining a $(1+\epsilon)$-approximate solution for this problem from $n^{((\alpha/\epsilon)^{O(d)})}$ as reported by Bil{\`o} et al. (see Vittorio Bil{\`o} et al: Geometric Clustering to Minimize the Sum of Cluster Sizes, ESA 2005) to $O\left( n+ {\left(\frac{k^{2d+1}}{\epsilon^d}\right)}^{ \min{\{\; 2k,\;\; (\alpha/\epsilon)^{O(d)} \;\}} } \right)$ that is, we obtain a running time that is \emph{linear} in the network size. Further results include a constant approximation algorithm for the TSP problem under squared (non-metric!) edge costs, which can be employed to implement a novel data aggregation protocol, as well as efficient schemes to perform $k$-hop multicasts.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Funke, Stefan
%A Laue, Sören
%A Naujoks, Rouven
%A Lotker, Zvi
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T Power assignment problems in wireless communication :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6820-E
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-1-004
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 25 p.
%X A fundamental class of problems in wireless communication is concerned
with the assignment of suitable transmission powers to wireless
devices/stations such that the
resulting communication graph satisfies certain desired properties and
the overall energy consumed is minimized. Many concrete communication
tasks in a
wireless network like broadcast, multicast, point-to-point routing,
creation of a communication backbone, etc. can be regarded as such a
power assignment problem.
This paper considers several problems of that kind; for example, one
problem studied before in (Vittorio Bilò et al.: Geometric Clustering
to Minimize the Sum of Cluster Sizes, ESA 2005) and (Helmut Alt et al.:
Minimum-cost coverage of point sets by disks, SCG 2006) aims to select
and assign powers to $k$ of the stations such that all other stations
are within reach of at least one of the selected stations. We improve
the running time for obtaining a $(1+\epsilon)$-approximate solution
for this problem from $n^{((\alpha/\epsilon)^{O(d)})}$, as reported by
Bilò et al., to
$O\left( n+ {\left(\frac{k^{2d+1}}{\epsilon^d}\right)}^{ \min{\{\; 2k,\;\; (\alpha/\epsilon)^{O(d)} \;\}} } \right)$,
that is, we obtain a running time that is \emph{linear} in the network
size. Further results include a constant approximation algorithm for
the TSP problem under squared (non-metric!) edge costs, which can be
employed to implement a novel data aggregation protocol, as well as
efficient schemes to perform $k$-hop multicasts.
%B Research Report / Max-Planck-Institut für Informatik
On fast construction of spatial hierarchies for ray tracing
V. Havran, R. Herzog and H.-P. Seidel
Technical Report, 2006
V. Havran, R. Herzog and H.-P. Seidel
Technical Report, 2006
Abstract
In this paper we address the problem of fast construction of spatial
hierarchies for ray tracing with applications in animated environments
including non-rigid animations. We discuss properties of currently
used techniques with $O(N \log N)$ construction time for kd-trees and
bounding volume hierarchies. Further, we propose a hybrid data
structure blending between a spatial kd-tree and bounding volume
primitives. We keep our novel hierarchical data structures
algorithmically efficient and comparable with kd-trees by the use of a
cost model based on surface area heuristics. Although the time
complexity $O(N \log N)$ is a lower bound required for construction of
any spatial hierarchy that corresponds to sorting based on
comparisons, using an approximate method based on discretization we
propose new hierarchical data structures with expected $O(N \log\log N)$
time complexity. We also discuss the constants behind the construction
algorithms of spatial hierarchies that are important in practice. We
document the performance of our algorithms by results obtained from the
implementation tested on nine different scenes.
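The cost model such builders optimize is the surface area heuristic. A
Python sketch of picking the best split plane on one axis (a naive
quadratic sweep with hypothetical constants; the report's concern is fast
approximate construction, not this reference evaluation):

import numpy as np

def surface_area(mn, mx):
    d = np.maximum(mx - mn, 0.0)
    return 2.0 * (d[0] * d[1] + d[1] * d[2] + d[0] * d[2])

def sah_best_split(bounds, node_min, node_max, axis, c_trav=1.0, c_isect=1.5):
    # bounds: (n, 2, 3) per-primitive AABBs; candidate planes are the
    # primitive bounds, as in standard SAH builders.
    sa_n = surface_area(node_min, node_max)
    best_cost, best_plane = float('inf'), None
    for plane in np.unique(bounds[:, :, axis]):
        n_l = int((bounds[:, 0, axis] < plane).sum())   # reaches left side
        n_r = int((bounds[:, 1, axis] > plane).sum())   # reaches right side
        lmax, rmin = node_max.copy(), node_min.copy()
        lmax[axis], rmin[axis] = plane, plane
        cost = c_trav + c_isect * (surface_area(node_min, lmax) * n_l +
                                   surface_area(rmin, node_max) * n_r) / sa_n
        if cost < best_cost:
            best_cost, best_plane = cost, float(plane)
    return best_cost, best_plane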
Export
BibTeX
@techreport{HavranHerzogSeidel2006,
TITLE = {On fast construction of spatial hierarchies for ray tracing},
AUTHOR = {Havran, Vlastimil and Herzog, Robert and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-004},
NUMBER = {MPI-I-2006-4-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {In this paper we address the problem of fast construction of spatial hierarchies for ray tracing with applications in animated environments including non-rigid animations. We discuss properties of currently used techniques with $O(N \log N)$ construction time for kd-trees and bounding volume hierarchies. Further, we propose a hybrid data structure blending between a spatial kd-tree and bounding volume primitives. We keep our novel hierarchical data structures algorithmically efficient and comparable with kd-trees by the use of a cost model based on surface area heuristics. Although the time complexity $O(N \log N)$ is a lower bound required for construction of any spatial hierarchy that corresponds to sorting based on comparisons, using an approximate method based on discretization we propose new hierarchical data structures with expected $O(N \log\log N)$ time complexity. We also discuss the constants behind the construction algorithms of spatial hierarchies that are important in practice. We document the performance of our algorithms by results obtained from the implementation tested on nine different scenes.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Havran, Vlastimil
%A Herzog, Robert
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T On fast construction of spatial hierarchies for ray tracing :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6807-8
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-004
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 40 p.
%X In this paper we address the problem of fast construction of spatial
hierarchies for ray tracing with applications in animated environments
including non-rigid animations. We discuss properties of currently
used techniques with $O(N \log N)$ construction time for kd-trees and
bounding volume hierarchies. Further, we propose a hybrid data
structure blending between a spatial kd-tree and bounding volume
primitives. We keep our novel hierarchical data structures
algorithmically efficient and comparable with kd-trees by the use of a
cost model based on surface area heuristics. Although the time
complexity $O(N \log N)$ is a lower bound required for construction of
any spatial hierarchy that corresponds to sorting based on
comparisons, using an approximate method based on discretization we
propose new hierarchical data structures with expected $O(N \log\log N)$
time complexity. We also discuss the constants behind the construction
algorithms of spatial hierarchies that are important in practice. We
document the performance of our algorithms by results obtained from the
implementation tested on nine different scenes.
%B Research Report / Max-Planck-Institut für Informatik
Yago - a core of semantic knowledge
G. Kasneci, F. Suchanek and G. Weikum
Technical Report, 2006
G. Kasneci, F. Suchanek and G. Weikum
Technical Report, 2006
Abstract
We present YAGO, a light-weight and extensible ontology with high coverage and quality.
YAGO builds on entities and relations and currently contains roughly 900,000 entities and 5,000,000 facts.
This includes the Is-A hierarchy as well as non-taxonomic relations between entities (such as the hasWonPrize relation).
The facts have been automatically extracted from the unification of Wikipedia and WordNet,
using a carefully designed combination of rule-based and heuristic methods described in this paper.
The resulting knowledge base is a major step beyond WordNet: in quality by adding knowledge about
individuals like persons, organizations, products, etc. with their semantic relationships --
and in quantity by increasing the number of facts by more than an order of magnitude.
Our empirical evaluation of fact correctness shows an accuracy of about 95%.
YAGO is based on a logically clean model, which is decidable, extensible, and compatible with RDFS.
Finally, we show how YAGO can be further extended by state-of-the-art information extraction techniques.
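The data model is easy to picture: facts are subject-relation-object
triples, and class membership is resolved through the transitive Is-A
hierarchy. A toy Python sketch (illustrative only, not YAGO's actual
interface or contents):

class TripleStore:
    def __init__(self):
        self.facts = set()
    def add(self, s, r, o):
        self.facts.add((s, r, o))
    def objects(self, s, r):
        return {o for (s2, r2, o) in self.facts if (s2, r2) == (s, r)}
    def classes_of(self, entity):
        # direct types plus all superclasses (transitive subClassOf)
        todo, seen = list(self.objects(entity, "type")), set()
        while todo:
            c = todo.pop()
            if c not in seen:
                seen.add(c)
                todo += self.objects(c, "subClassOf")
        return seen

kb = TripleStore()
kb.add("Max_Planck", "type", "physicist")
kb.add("physicist", "subClassOf", "scientist")
kb.add("Max_Planck", "hasWonPrize", "Nobel_Prize_in_Physics")
assert "scientist" in kb.classes_of("Max_Planck")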
Export
BibTeX
@techreport{KasneciSuchanekWeikum2006,
TITLE = {Yago -- a core of semantic knowledge},
AUTHOR = {Kasneci, Gjergji and Suchanek, Fabian and Weikum, Gerhard},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-5-006},
NUMBER = {MPI-I-2006-5-006},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {We present YAGO, a light-weight and extensible ontology with high coverage and quality. YAGO builds on entities and relations and currently contains roughly 900,000 entities and 5,000,000 facts. This includes the Is-A hierarchy as well as non-taxonomic relations between entities (such as the hasWonPrize relation). The facts have been automatically extracted from the unification of Wikipedia and WordNet, using a carefully designed combination of rule-based and heuristic methods described in this paper. The resulting knowledge base is a major step beyond WordNet: in quality by adding knowledge about individuals like persons, organizations, products, etc. with their semantic relationships -- and in quantity by increasing the number of facts by more than an order of magnitude. Our empirical evaluation of fact correctness shows an accuracy of about 95%. YAGO is based on a logically clean model, which is decidable, extensible, and compatible with RDFS. Finally, we show how YAGO can be further extended by state-of-the-art information extraction techniques.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Kasneci, Gjergji
%A Suchanek, Fabian
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Yago - a core of semantic knowledge :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-670A-A
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-5-006
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 39 p.
%X We present YAGO, a light-weight and extensible ontology with high coverage and quality.
YAGO builds on entities and relations and currently contains roughly 900,000 entities and 5,000,000 facts.
This includes the Is-A hierarchy as well as non-taxonomic relations between entities (such as the hasWonPrize relation).
The facts have been automatically extracted from the unification of Wikipedia and WordNet,
using a carefully designed combination of rule-based and heuristic methods described in this paper.
The resulting knowledge base is a major step beyond WordNet: in quality by adding knowledge about
individuals like persons, organizations, products, etc. with their semantic relationships --
and in quantity by increasing the number of facts by more than an order of magnitude.
Our empirical evaluation of fact correctness shows an accuracy of about 95%.
YAGO is based on a logically clean model, which is decidable, extensible, and compatible with RDFS.
Finally, we show how YAGO can be further extended by state-of-the-art information extraction techniques.
%B Research Report / Max-Planck-Institut für Informatik
Division-free computation of subresultants using Bezout matrices
M. Kerber
Technical Report, 2006
M. Kerber
Technical Report, 2006
Abstract
We present an algorithm to compute the subresultant sequence of two polynomials that completely avoids division in the ground domain, generalizing an algorithm from Abdeljaoued et al. (see Abdeljaoued et al.: Minors of Bezout Matrices..., Int. J. of Comp. Math. 81, 2004). This is done by evaluating determinants of slightly manipulated Bezout matrices using an algorithm of Berkowitz. Experiments show that our algorithm is superior to pseudo-division approaches for moderate degrees if the domain contains indeterminates.
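To make the objects concrete: the Bezout matrix of p and q (degree n) has
entries c_ij = sum_t (a_{i+j+1-t} b_t - a_t b_{i+j+1-t}), its determinant
equals the resultant up to sign, and subresultants arise as minors. A
Python/SymPy sketch that evaluates the determinant with SymPy's
division-free Berkowitz method (illustrative, not the report's optimized
algorithm):

from sympy import Matrix, symbols

def bezout_matrix(p, q):
    # p, q: coefficient lists [c0, c1, ..., cn], lowest degree first
    n = max(len(p), len(q)) - 1
    a = list(p) + [0] * (n + 1 - len(p))
    b = list(q) + [0] * (n + 1 - len(q))
    def entry(i, j):
        return sum(a[i + j + 1 - t] * b[t] - a[t] * b[i + j + 1 - t]
                   for t in range(min(i, j) + 1) if i + j + 1 - t <= n)
    return Matrix(n, n, entry)

t = symbols('t')                      # an indeterminate in the ground domain
B = bezout_matrix([t, 0, 1], [1, 1])  # p = x^2 + t, q = x + 1
print(B.det(method='berkowitz'))      # -> -t - 1, the resultant up to sign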
Export
BibTeX
@techreport{Kerber2006,
TITLE = {Division-free computation of subresultants using {Bezout} matrices},
AUTHOR = {Kerber, Michael},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-1-006},
NUMBER = {MPI-I-2006-1-006},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {We present an algorithm to compute the subresultant sequence of two polynomials that completely avoids division in the ground domain, generalizing an algorithm from Abdeljaoued et al.\ (see Abdeljaoued et al.: Minors of Bezout Matrices\ldots, Int.\ J.\ of Comp.\ Math.\ 81, 2004). This is done by evaluating determinants of slightly manipulated Bezout matrices using an algorithm of Berkowitz. Experiments show that our algorithm is superior to pseudo-division approaches for moderate degrees if the domain contains indeterminates.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Kerber, Michael
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Division-free computation of subresultants using Bezout matrices :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-681D-7
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-1-006
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 20 p.
%X We present an algorithm to compute the subresultant sequence of two polynomials that completely avoids division in the ground domain, generalizing an algorithm from Abdeljaoued et al. (see Abdeljaoued et al.: Minors of Bezout Matrices..., Int. J. of Comp. Math. 81, 2004). This is done by evaluating determinants of slightly manipulated Bezout matrices using an algorithm of Berkowitz. Experiments show that our algorithm is superior to pseudo-division approaches for moderate degrees if the domain contains indeterminates.
%B Research Report / Max-Planck-Institut für Informatik
Exploiting Community Behavior for Enhanced Link Analysis and Web Search
J. Luxenburger and G. Weikum
Technical Report, 2006
J. Luxenburger and G. Weikum
Technical Report, 2006
Abstract
Methods for Web link analysis and authority ranking such as PageRank are based
on the assumption that a user endorses a Web page when creating a hyperlink to
this page. There is a wealth of additional user-behavior information that could
be considered for improving authority analysis, for example, the history of
queries that a user community posed to a search engine over an extended time
period, or observations about which query-result pages were clicked on and
which ones were not clicked on after a user saw the summary snippets of the
top-10 results. This paper enhances link analysis methods by incorporating
additional user assessments based on query logs and click streams, including
negative feedback when a query-result page does not satisfy the user demand or
is even perceived as spam. Our methods use various novel forms of advanced
Markov models whose states correspond to users and queries in addition to Web
pages and whose links also reflect the relationships derived from query-result
clicks, query refinements, and explicit ratings. Preliminary experiments are
presented as a proof of concept.
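The baseline these models extend is ordinary PageRank power iteration on
a stochastic transition matrix; in the paper's extended Markov models the
state space additionally contains users and queries, with edges derived
from query logs and clicks. A NumPy sketch on a made-up four-state graph
(state 3 standing in for a query node linking to clicked results):

import numpy as np

def pagerank(adj, damping=0.85, iters=100):
    # adj[i, j] = 1 encodes an edge j -> i; dangling states jump uniformly
    n = adj.shape[0]
    out = adj.sum(axis=0)
    P = np.where(out > 0, adj / np.maximum(out, 1), 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * P @ r
    return r / r.sum()

adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 1],
                [1, 0, 0, 0],
                [0, 0, 0, 0]], dtype=float)
print(pagerank(adj))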
Export
BibTeX
@techreport{TechReportDelis0447_2006,
TITLE = {Exploiting Community Behavior for Enhanced Link Analysis and Web Search},
AUTHOR = {Luxenburger, Julia and Weikum, Gerhard},
LANGUAGE = {eng},
NUMBER = {DELIS-TR-0447},
INSTITUTION = {University of Paderborn, Heinz Nixdorf Institute},
ADDRESS = {Paderborn, Germany},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {Methods for Web link analysis and authority ranking such as PageRank are based on the assumption that a user endorses a Web page when creating a hyperlink to this page. There is a wealth of additional user-behavior information that could be considered for improving authority analysis, for example, the history of queries that a user community posed to a search engine over an extended time period, or observations about which query-result pages were clicked on and which ones were not clicked on after a user saw the summary snippets of the top-10 results. This paper enhances link analysis methods by incorporating additional user assessments based on query logs and click streams, including negative feedback when a query-result page does not satisfy the user demand or is even perceived as spam. Our methods use various novel forms of advanced Markov models whose states correspond to users and queries in addition to Web pages and whose links also reflect the relationships derived from query-result clicks, query refinements, and explicit ratings. Preliminary experiments are presented as a proof of concept.},
}
Endnote
%0 Report
%A Luxenburger, Julia
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Exploiting Community Behavior for Enhanced Link Analysis and Web Search :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0019-BC47-9
%Y University of Paderborn, Heinz Nixdorf Institute
%C Paderborn, Germany
%D 2006
%X Methods for Web link analysis and authority ranking such as PageRank are based
on the assumption that a user endorses a Web page when creating a hyperlink to
this page. There is a wealth of additional user-behavior information that could
be considered for improving authority analysis, for example, the history of
queries that a user community posed to a search engine over an extended time
period, or observations about which query-result pages were clicked on and
which ones were not clicked on after a user saw the summary snippets of the
top-10 results. This paper enhances link analysis methods by incorporating
additional user assessments based on query logs and click streams, including
negative feedback when a query-result page does not satisfy the user demand or
is even perceived as spam. Our methods use various novel forms of advanced
Markov models whose states correspond to users and queries in addition to Web
pages and whose links also reflect the relationships derived from query-result
clicks, query refinements, and explicit ratings. Preliminary experiments are
presented as a proof of concept.
Feature-preserving non-local denoising of static and time-varying range data
O. Schall, A. Belyaev and H.-P. Seidel
Technical Report, 2006
O. Schall, A. Belyaev and H.-P. Seidel
Technical Report, 2006
Abstract
We present a novel algorithm for accurately denoising static and
time-varying range data. Our approach is inspired by
similarity-based non-local image filtering. We show that our
proposed method is easy to implement and outperforms recent
state-of-the-art filtering approaches. Furthermore, it preserves fine
shape features and produces an accurate smoothing result in the spatial
domain and along the time domain.
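The non-local filtering principle referenced here: each sample is replaced
by a weighted average of samples whose surrounding patches look similar,
with weights decaying in the patch distance. A compact (and deliberately
slow) Python sketch on a 2D range image; parameter names and values are
illustrative, and the report's extension to time-varying data is not
reproduced:

import numpy as np

def nonlocal_means(z, patch=3, search=7, h=0.1):
    z = np.asarray(z, dtype=float)
    r, s = patch // 2, search // 2
    pad = np.pad(z, r + s, mode='reflect')
    out = np.zeros_like(z)
    for i in range(z.shape[0]):
        for j in range(z.shape[1]):
            ref = pad[i + s:i + s + patch, j + s:j + s + patch]
            weights = acc = 0.0
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    cand = pad[i + s + di:i + s + di + patch,
                               j + s + dj:j + s + dj + patch]
                    w = np.exp(-np.sum((ref - cand) ** 2) / h ** 2)
                    weights += w
                    acc += w * pad[i + r + s + di, j + r + s + dj]
            out[i, j] = acc / weights
    return out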
Export
BibTeX
@techreport{SchallBelyaevSeidel2006,
TITLE = {Feature-preserving non-local denoising of static and time-varying range data},
AUTHOR = {Schall, Oliver and Belyaev, Alexander and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-007},
NUMBER = {MPI-I-2006-4-007},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {We present a novel algorithm for accurately denoising static and time-varying range data. Our approach is inspired by similarity-based non-local image filtering. We show that our proposed method is easy to implement and outperforms recent state-of-the-art filtering approaches. Furthermore, it preserves fine shape features and produces an accurate smoothing result in the spatial domain and along the time domain.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Schall, Oliver
%A Belyaev, Alexander
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Feature-preserving non-local denoising of static and time-varying range data :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-673D-7
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-007
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 22 p.
%X We present a novel algorithm for accurately denoising static and
time-varying range data. Our approach is inspired by
similarity-based non-local image filtering. We show that our
proposed method is easy to implement and outperforms recent
state-of-the-art filtering approaches. Furthermore, it preserves fine
shape features and produces an accurate smoothing result in the spatial
domain and along the time domain.
%B Research Report / Max-Planck-Institut für Informatik
Combining linguistic and statistical analysis to extract relations from web documents
F. Suchanek, G. Ifrim and G. Weikum
Technical Report, 2006
F. Suchanek, G. Ifrim and G. Weikum
Technical Report, 2006
Abstract
Search engines, question answering systems and classification systems
alike can greatly profit from formalized world knowledge.
Unfortunately, manually compiled collections of world knowledge (such
as WordNet or the Suggested Upper Merged Ontology SUMO) often suffer
from low coverage, high assembling costs and fast aging. In contrast,
the World Wide Web provides an endless source of knowledge, assembled
by millions of people, updated constantly and available for free. In
this paper, we propose a novel method for learning arbitrary binary
relations from natural language Web documents, without human
interaction. Our system, LEILA, combines linguistic analysis and
machine learning techniques to find robust patterns in the text and to
generalize them. For initialization, we only require a set of examples
of the target relation and a set of counterexamples (e.g. from
WordNet). The architecture consists of 3 stages: Finding patterns in
the corpus based on the given examples, assessing the patterns based on
probabilistic confidence, and applying the generalized patterns to
propose pairs for the target relation. We prove the benefits and
practical viability of our approach by extensive experiments, showing
that LEILA achieves consistent improvements over existing comparable
techniques (e.g. Snowball, TextToOnto).
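The three-stage pipeline can be caricatured in a few lines: harvest the
textual patterns that connect known example pairs, keep patterns with
enough support, and apply them to propose new pairs. A toy Python sketch
without the linguistic analysis and the statistical confidence model that
LEILA actually employs (all names illustrative):

import re
from collections import Counter

def middle_pattern(sentence, a, b):
    # token sequence between two entity mentions, or None
    m = re.search(re.escape(a) + r'(.*?)' + re.escape(b), sentence)
    return tuple(m.group(1).split()) if m else None

def learn_and_apply(corpus, seeds, entities, min_support=2):
    patterns = Counter(p for s in corpus for (a, b) in seeds
                       if (p := middle_pattern(s, a, b)))
    good = {p for p, c in patterns.items() if c >= min_support}
    found = set()
    for s in corpus:
        for a in entities:
            for b in entities:
                if a != b and middle_pattern(s, a, b) in good:
                    found.add((a, b))
    return found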
Export
BibTeX
@techreport{Suchanek2006,
TITLE = {Combining linguistic and statistical analysis to extract relations from web documents},
AUTHOR = {Suchanek, Fabian and Ifrim, Georgiana and Weikum, Gerhard},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-5-004},
NUMBER = {MPI-I-2006-5-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {Search engines, question answering systems and classification systems alike can greatly profit from formalized world knowledge. Unfortunately, manually compiled collections of world knowledge (such as WordNet or the Suggested Upper Merged Ontology SUMO) often suffer from low coverage, high assembling costs and fast aging. In contrast, the World Wide Web provides an endless source of knowledge, assembled by millions of people, updated constantly and available for free. In this paper, we propose a novel method for learning arbitrary binary relations from natural language Web documents, without human interaction. Our system, LEILA, combines linguistic analysis and machine learning techniques to find robust patterns in the text and to generalize them. For initialization, we only require a set of examples of the target relation and a set of counterexamples (e.g. from WordNet). The architecture consists of 3 stages: Finding patterns in the corpus based on the given examples, assessing the patterns based on probabilistic confidence, and applying the generalized patterns to propose pairs for the target relation. We prove the benefits and practical viability of our approach by extensive experiments, showing that LEILA achieves consistent improvements over existing comparable techniques (e.g. Snowball, TextToOnto).},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Suchanek, Fabian
%A Ifrim, Georgiana
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Combining linguistic and statistical analysis to extract relations from web documents :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6710-9
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-5-004
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 37 p.
%X Search engines, question answering systems and classification systems
alike can greatly profit from formalized world knowledge.
Unfortunately, manually compiled collections of world knowledge (such
as WordNet or the Suggested Upper Merged Ontology SUMO) often suffer
from low coverage, high assembling costs and fast aging. In contrast,
the World Wide Web provides an endless source of knowledge, assembled
by millions of people, updated constantly and available for free. In
this paper, we propose a novel method for learning arbitrary binary
relations from natural language Web documents, without human
interaction. Our system, LEILA, combines linguistic analysis and
machine learning techniques to find robust patterns in the text and to
generalize them. For initialization, we only require a set of examples
of the target relation and a set of counterexamples (e.g. from
WordNet). The architecture consists of 3 stages: Finding patterns in
the corpus based on the given examples, assessing the patterns based on
probabilistic confidence, and applying the generalized patterns to
propose pairs for the target relation. We prove the benefits and
practical viability of our approach by extensive experiments, showing
that LEILA achieves consistent improvements over existing comparable
techniques (e.g. Snowball, TextToOnto).
%B Research Report / Max-Planck-Institut für Informatik
Enhanced dynamic reflectometry for relightable free-viewpoint video
C. Theobalt, N. Ahmed, H. P. A. Lensch, M. A. Magnor and H.-P. Seidel
Technical Report, 2006
C. Theobalt, N. Ahmed, H. P. A. Lensch, M. A. Magnor and H.-P. Seidel
Technical Report, 2006
Abstract
Free-Viewpoint Video of Human Actors allows photo-realistic rendering
of real-world people under novel viewing conditions. Dynamic
Reflectometry extends the concept of free-viewpoint video and in
addition allows rendering under novel lighting conditions. In this
work, we present an enhanced method for capturing human shape and
motion as well as dynamic surface reflectance properties from a sparse
set of input video streams.
We augment our initial method for model-based relightable
free-viewpoint video in several ways. Firstly, a single-skin mesh is
introduced for the continuous appearance of the model. Moreover, an
algorithm is presented that detects and compensates for lateral
shifting of textiles in order to improve temporal texture
registration. Finally, a structured resampling approach is introduced
which enables reliable estimation of spatially varying surface
reflectance despite a static recording setup.
The new algorithmic ingredients, along with the Relightable 3D Video
framework, enable us to realistically reproduce the appearance of
animated virtual actors under different lighting conditions, as well
as to interchange surface attributes among different people, e.g. for
virtual dressing. Our contribution can be used to create 3D renditions
of real-world people under arbitrary novel lighting conditions on
standard graphics hardware.
Export
BibTeX
@techreport{TheobaltAhmedLenschMagnorSeidel2006,
TITLE = {Enhanced dynamic reflectometry for relightable free-viewpoint video},
AUTHOR = {Theobalt, Christian and Ahmed, Naveed and Lensch, Hendrik P. A. and Magnor, Marcus A. and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-006},
NUMBER = {MPI-I-2006-4-006},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {Free-Viewpoint Video of Human Actors allows photo-realistic rendering of real-world people under novel viewing conditions. Dynamic Reflectometry extends the concept of free-viewpoint video and in addition allows rendering under novel lighting conditions. In this work, we present an enhanced method for capturing human shape and motion as well as dynamic surface reflectance properties from a sparse set of input video streams. We augment our initial method for model-based relightable free-viewpoint video in several ways. Firstly, a single-skin mesh is introduced for the continuous appearance of the model. Moreover, an algorithm is presented that detects and compensates for lateral shifting of textiles in order to improve temporal texture registration. Finally, a structured resampling approach is introduced which enables reliable estimation of spatially varying surface reflectance despite a static recording setup. The new algorithmic ingredients, along with the Relightable 3D Video framework, enable us to realistically reproduce the appearance of animated virtual actors under different lighting conditions, as well as to interchange surface attributes among different people, e.g. for virtual dressing. Our contribution can be used to create 3D renditions of real-world people under arbitrary novel lighting conditions on standard graphics hardware.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Theobalt, Christian
%A Ahmed, Naveed
%A Lensch, Hendrik P. A.
%A Magnor, Marcus A.
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Graphics - Optics - Vision, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Enhanced dynamic reflectometry for relightable free-viewpoint video :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-67F4-B
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-006
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 37 p.
%X Free-Viewpoint Video of Human Actors allows photo-realistic rendering
of real-world people under novel viewing conditions. Dynamic
Reflectometry extends the concept of free-viewpoint video and in
addition allows rendering under novel lighting conditions. In this
work, we present an enhanced method for capturing human shape and
motion as well as dynamic surface reflectance properties from a sparse
set of input video streams.
We augment our initial method for model-based relightable
free-viewpoint video in several ways. Firstly, a single-skin mesh is
introduced for the continuous appearance of the model. Moreover, an
algorithm is presented that detects and compensates for lateral
shifting of textiles in order to improve temporal texture
registration. Finally, a structured resampling approach is introduced
which enables reliable estimation of spatially varying surface
reflectance despite a static recording setup.
The new algorithmic ingredients, along with the Relightable 3D Video
framework, enable us to realistically reproduce the appearance of
animated virtual actors under different lighting conditions, as well
as to interchange surface attributes among different people, e.g. for
virtual dressing. Our contribution can be used to create 3D renditions
of real-world people under arbitrary novel lighting conditions on
standard graphics hardware.
%B Research Report / Max-Planck-Institut für Informatik
GPU point list generation through histogram pyramids
G. Ziegler, A. Tevs, C. Theobalt and H.-P. Seidel
Technical Report, 2006
G. Ziegler, A. Tevs, C. Theobalt and H.-P. Seidel
Technical Report, 2006
Abstract
Image pyramids are frequently used in porting non-local algorithms to
graphics hardware. A histogram pyramid (short: HistoPyramid), a special
version of an image pyramid, hierarchically sums up the number of active
entries in a 2D image. We show how a HistoPyramid can be utilized as an
implicit indexing data structure, allowing us to convert a sparse matrix
into a coordinate list of active cell entries (a point list) on graphics
hardware. The algorithm reduces a highly sparse matrix with N elements to
a list of its M active entries in O(N) + M (log N) steps, despite the
restricted graphics hardware architecture. Applications are numerous,
including feature detection, pixel classification and binning, conversion
of 3D volumes to particle clouds, and sparse matrix compression.
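The construction is easy to emulate sequentially: build the pyramid
bottom-up by summing 2x2 blocks, then, for each output index, descend the
pyramid choosing the quadrant whose count range contains that index. A
NumPy sketch for square power-of-two images (on the GPU, each output
point would be generated by its own thread):

import numpy as np

def build_histopyramid(active):
    # level 0 is the 0/1 base image; the 1x1 top holds the total count M
    levels = [active.astype(np.int64)]
    while levels[-1].shape[0] > 1:
        a = levels[-1]
        levels.append(a[0::2, 0::2] + a[1::2, 0::2] +
                      a[0::2, 1::2] + a[1::2, 1::2])
    return levels

def point_list(levels):
    out = []
    for k in range(int(levels[-1][0, 0])):
        y = x = 0
        for lvl in range(len(levels) - 2, -1, -1):
            y, x = 2 * y, 2 * x
            for dy, dx in ((0, 0), (1, 0), (0, 1), (1, 1)):
                c = int(levels[lvl][y + dy, x + dx])
                if k < c:              # the k-th point lies in this quadrant
                    y, x = y + dy, x + dx
                    break
                k -= c                 # skip this quadrant's points
        out.append((y, x))
    return out

img = np.random.rand(8, 8) > 0.8
assert sorted(point_list(build_histopyramid(img))) == \
       sorted(map(tuple, np.argwhere(img)))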
Export
BibTeX
@techreport{ZieglerTevsTheobaltSeidel2006,
TITLE = {{GPU} point list generation through histogram pyramids},
AUTHOR = {Ziegler, Gernot and Tevs, Art and Theobalt, Christian and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-002},
NUMBER = {MPI-I-2006-4-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {Image pyramids are frequently used in porting non-local algorithms to graphics hardware. A histogram pyramid (short: HistoPyramid), a special version of an image pyramid, hierarchically sums up the number of active entries in a 2D image. We show how a HistoPyramid can be utilized as an implicit indexing data structure, allowing us to convert a sparse matrix into a coordinate list of active cell entries (a point list) on graphics hardware. The algorithm reduces a highly sparse matrix with N elements to a list of its M active entries in O(N) + M (log N) steps, despite the restricted graphics hardware architecture. Applications are numerous, including feature detection, pixel classification and binning, conversion of 3D volumes to particle clouds, and sparse matrix compression.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Ziegler, Gernot
%A Tevs, Art
%A Theobalt, Christian
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T GPU point list generation through histogram pyramids :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-680E-9
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-002
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 13 p.
%X Image pyramids are frequently used in porting non-local algorithms to
graphics hardware. A histogram pyramid (short: HistoPyramid), a special
version of an image pyramid, hierarchically sums up the number of active
entries in a 2D image. We show how a HistoPyramid can be utilized as an
implicit indexing data structure, allowing us to convert a sparse matrix
into a coordinate list of active cell entries (a point list) on graphics
hardware. The algorithm reduces a highly sparse matrix with N elements to
a list of its M active entries in O(N) + M (log N) steps, despite the
restricted graphics hardware architecture. Applications are numerous,
including feature detection, pixel classification and binning, conversion
of 3D volumes to particle clouds, and sparse matrix compression.
%B Research Report / Max-Planck-Institut für Informatik
2005
Improved algorithms for all-pairs approximate shortest paths in weighted graphs
S. Baswana and K. Telikepalli
Technical Report, 2005
S. Baswana and K. Telikepalli
Technical Report, 2005
Abstract
The all-pairs approximate shortest-paths problem is an interesting
variant of the classical all-pairs shortest-paths problem in graphs.
The problem aims at building a data-structure for a given graph
with the following two features. Firstly, for any two vertices,
it should report an approximate shortest path between them,
that is, a path which is longer than the shortest path
by some small factor. Secondly, the data-structure should require
less preprocessing time (strictly sub-cubic) and occupy optimal space
(sub-quadratic), at the cost of this approximation.
In this paper, we present algorithms for computing all-pairs approximate
shortest paths in a weighted undirected graph. These algorithms significantly
improve the existing results for this problem.
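One classic way to see the space/stretch trade-off is a landmark scheme:
store exact distances only from a few sampled vertices and estimate
d(u,v) as the minimum over landmarks L of d(u,L)+d(L,v). A Python sketch
of that generic idea (it illustrates the problem setting, not the
report's algorithms; the graph is assumed connected):

import heapq
import random

def dijkstra(graph, src):
    # graph: {u: [(v, w), ...]}; returns shortest distances from src
    dist, pq = {src: 0.0}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def landmark_oracle(graph, num_landmarks, seed=0):
    random.seed(seed)
    landmarks = random.sample(list(graph), num_landmarks)
    tables = {l: dijkstra(graph, l) for l in landmarks}
    def query(u, v):
        # an over-estimate of d(u,v); never shorter than the true distance
        return min(t[u] + t[v] for t in tables.values())
    return query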
Export
BibTeX
@techreport{BaswanaTelikepalli2005,
TITLE = {Improved algorithms for all-pairs approximate shortest paths in weighted graphs},
AUTHOR = {Baswana, Surender and Telikepalli, Kavitha},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-003},
NUMBER = {MPI-I-2005-1-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2005},
DATE = {2005},
ABSTRACT = {The all-pairs approximate shortest-paths problem is an interesting variant of the classical all-pairs shortest-paths problem in graphs. The problem aims at building a data-structure for a given graph with the following two features. Firstly, for any two vertices, it should report an {\emph{approximate}} shortest path between them, that is, a path which is longer than the shortest path by some {\emph{small}} factor. Secondly, the data-structure should require less preprocessing time (strictly sub-cubic) and occupy optimal space (sub-quadratic), at the cost of this approximation. In this paper, we present algorithms for computing all-pairs approximate shortest paths in a weighted undirected graph. These algorithms significantly improve the existing results for this problem.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Baswana, Surender
%A Telikepalli, Kavitha
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T Improved algorithms for all-pairs approximate shortest paths in weighted graphs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6854-9
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-003
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2005
%P 26 p.
%X The all-pairs approximate shortest-paths problem is an interesting
variant of the classical all-pairs shortest-paths problem in graphs.
The problem aims at building a data-structure for a given graph
with the following two features. Firstly, for any two vertices,
it should report an approximate shortest path between them,
that is, a path which is longer than the shortest path
by some small factor. Secondly, the data-structure should require
less preprocessing time (strictly sub-cubic) and occupy optimal space
(sub-quadratic), at the cost of this approximation.
In this paper, we present algorithms for computing all-pairs approximate
shortest paths in a weighted undirected graph. These algorithms significantly
improve the existing results for this problem.
%B Research Report / Max-Planck-Institut für Informatik
STXXL: Standard Template Library for XXL Data Sets
R. Dementiev, L. Kettner and P. Sanders
Technical Report, 2005
R. Dementiev, L. Kettner and P. Sanders
Technical Report, 2005
Export
BibTeX
@techreport{Kettner2005StxxlReport,
TITLE = {{STXXL}: Standard Template Library for {XXL} Data Sets},
AUTHOR = {Dementiev, Roman and Kettner, Lutz and Sanders, Peter},
LANGUAGE = {eng},
NUMBER = {2005/18},
INSTITUTION = {Fakult{\"a}t f{\"u}r Informatik, University of Karlsruhe},
ADDRESS = {Karlsruhe, Germany},
YEAR = {2005},
DATE = {2005},
}
Endnote
%0 Report
%A Dementiev, Roman
%A Kettner, Lutz
%A Sanders, Peter
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T STXXL: Standard Template Library for XXL Data Sets :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0019-E689-4
%Y Fakultät für Informatik, University of Karlsruhe
%C Karlsruhe, Germany
%D 2005
An empirical model for heterogeneous translucent objects
C. Fuchs, M. Gösele, T. Chen and H.-P. Seidel
Technical Report, 2005
C. Fuchs, M. Gösele, T. Chen and H.-P. Seidel
Technical Report, 2005
Abstract
We introduce an empirical model for multiple scattering in heterogeneous
translucent objects for which classical approximations such as the
dipole approximation to the diffusion equation are no longer valid.
Motivated by the exponential fall-off of scattered intensity with
distance, diffuse subsurface scattering is represented as a sum of
exponentials per surface point plus a modulation texture. Modeling
quality can be improved by using an anisotropic model where exponential
parameters are determined per surface location and scattering direction.
We validate the scattering model for a set of planar object samples
which were recorded under controlled conditions and quantify the
modeling error. Furthermore, several translucent objects with complex
geometry are captured and compared to the real object under similar
illumination conditions.
Export
BibTeX
@techreport{FuchsGoeseleChenSeidel,
TITLE = {An empirical model for heterogeneous translucent objects},
AUTHOR = {Fuchs, Christian and G{\"o}sele, Michael and Chen, Tongbo and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-4-006},
NUMBER = {MPI-I-2005-4-006},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2005},
DATE = {2005},
ABSTRACT = {We introduce an empirical model for multiple scattering in heterogeneous translucent objects for which classical approximations such as the dipole approximation to the diffusion equation are no longer valid. Motivated by the exponential fall-off of scattered intensity with distance, diffuse subsurface scattering is represented as a sum of exponentials per surface point plus a modulation texture. Modeling quality can be improved by using an anisotropic model where exponential parameters are determined per surface location and scattering direction. We validate the scattering model for a set of planar object samples which were recorded under controlled conditions and quantify the modeling error. Furthermore, several translucent objects with complex geometry are captured and compared to the real object under similar illumination conditions.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Fuchs, Christian
%A Gösele, Michael
%A Chen, Tongbo
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T An empirical model for heterogeneous translucent objects :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-682F-0
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-4-006
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2005
%P 20 p.
%X We introduce an empirical model for multiple scattering in heterogeneous
translucent objects for which classical approximations such as the
dipole approximation to the diffusion equation are no longer valid.
Motivated by the exponential fall-off of scattered intensity with
distance, diffuse subsurface scattering is represented as a sum of
exponentials per surface point plus a modulation texture. Modeling
quality can be improved by using an anisotropic model where exponential
parameters are determined per surface location and scattering direction.
We validate the scattering model for a set of planar object samples
which were recorded under controlled conditions and quantify the
modeling error. Furthermore, several translucent objects with complex
geometry are captured and compared to the real object under similar
illumination conditions.
%B Research Report / Max-Planck-Institut für Informatik
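The heart of the model above, scattered-intensity fall-off represented as a sum of exponentials in distance, can be illustrated with a tiny fit. The sketch below is our own toy version on synthetic data; the report's acquisition setup, number of exponential terms, and modulation texture are not reproduced here.
import numpy as np
from scipy.optimize import curve_fit

# Synthetic "measured" fall-off of scattered intensity with distance.
rng = np.random.default_rng(1)
d = np.linspace(0.1, 5.0, 80)
truth = 0.9 * np.exp(-1.7 * d) + 0.2 * np.exp(-0.3 * d)
measured = truth + rng.normal(0.0, 0.005, d.shape)

def sum_of_exponentials(d, a1, b1, a2, b2):
    # Two-term model per surface point: R(d) = a1 e^{-b1 d} + a2 e^{-b2 d}.
    return a1 * np.exp(-b1 * d) + a2 * np.exp(-b2 * d)

params, _ = curve_fit(sum_of_exponentials, d, measured, p0=(1.0, 1.0, 0.1, 0.1))
print("fitted (a1, b1, a2, b2):", np.round(params, 3))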
Reflectance from images: a model-based approach for human faces
M. Fuchs, V. Blanz, H. P. A. Lensch and H.-P. Seidel
Technical Report, 2005
M. Fuchs, V. Blanz, H. P. A. Lensch and H.-P. Seidel
Technical Report, 2005
Abstract
In this paper, we present an image-based framework that acquires the
reflectance properties of a human face. A range scan of the face is not
required.
Based on a morphable face model, the system estimates the 3D
shape, and establishes point-to-point correspondence across images taken
from different viewpoints, and across different individuals' faces.
This provides a common parameterization of all reconstructed surfaces
that can be used to compare and transfer BRDF data between different
faces. Shape estimation from images compensates for deformations of the face
during the measurement process, such as facial expressions.
In the common parameterization, regions of homogeneous materials on the
face surface can be defined a priori. We apply analytical BRDF models to
express the reflectance properties of each region, and we estimate their
parameters in a least-squares fit from the image data. For each of the
surface points, the diffuse component of the BRDF is locally refined,
which provides high detail.
We present results for multiple analytical BRDF models, rendered at
novel orientations and lighting conditions.
Export
BibTeX
@techreport{FuchsBlanzLenschSeidel2005,
TITLE = {Reflectance from images: a model-based approach for human faces},
AUTHOR = {Fuchs, Martin and Blanz, Volker and Lensch, Hendrik P. A. and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-4-001},
NUMBER = {MPI-I-2005-4-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2005},
DATE = {2005},
ABSTRACT = {In this paper, we present an image-based framework that acquires the reflectance properties of a human face. A range scan of the face is not required. Based on a morphable face model, the system estimates the 3D shape, and establishes point-to-point correspondence across images taken from different viewpoints, and across different individuals' faces. This provides a common parameterization of all reconstructed surfaces that can be used to compare and transfer BRDF data between different faces. Shape estimation from images compensates for deformations of the face during the measurement process, such as facial expressions. In the common parameterization, regions of homogeneous materials on the face surface can be defined a priori. We apply analytical BRDF models to express the reflectance properties of each region, and we estimate their parameters in a least-squares fit from the image data. For each of the surface points, the diffuse component of the BRDF is locally refined, which provides high detail. We present results for multiple analytical BRDF models, rendered at novel orientations and lighting conditions.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Fuchs, Martin
%A Blanz, Volker
%A Lensch, Hendrik P. A.
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Reflectance from images: a model-based approach for human faces :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-683F-C
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-4-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2005
%P 33 p.
%X In this paper, we present an image-based framework that acquires the
reflectance properties of a human face. A range scan of the face is not
required.
Based on a morphable face model, the system estimates the 3D
shape, and establishes point-to-point correspondence across images taken
from different viewpoints, and across different individuals' faces.
This provides a common parameterization of all reconstructed surfaces
that can be used to compare and transfer BRDF data between different
faces. Shape estimation from images compensates for deformations of the face
during the measurement process, such as facial expressions.
In the common parameterization, regions of homogeneous materials on the
face surface can be defined a priori. We apply analytical BRDF models to
express the reflectance properties of each region, and we estimate their
parameters in a least-squares fit from the image data. For each of the
surface points, the diffuse component of the BRDF is locally refined,
which provides high detail.
We present results for multiple analytical BRDF models, rendered at
novel orientations and lighting conditions.
%B Research Report / Max-Planck-Institut für Informatik
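The per-region least-squares fit of analytical BRDF parameters described above can be sketched on synthetic data. The following is our own toy version (the report fits full BRDF models to calibrated multi-view imagery, which is not reproduced here); the Phong-style model, parameter names, and data are invented for illustration.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)

def render(params, cos_theta, cos_alpha):
    # Simple Phong-style model: diffuse albedo kd, specular ks, exponent n.
    kd, ks, n = params
    return kd * cos_theta + ks * cos_alpha ** n

# Synthetic observations of one homogeneous region under many
# light/view configurations.
cos_theta = rng.uniform(0.1, 1.0, 200)   # N . L
cos_alpha = rng.uniform(0.0, 1.0, 200)   # R . V
obs = render((0.6, 0.3, 20.0), cos_theta, cos_alpha)
obs += rng.normal(0.0, 0.002, obs.shape)

fit = least_squares(
    lambda p: render(p, cos_theta, cos_alpha) - obs,
    x0=(0.5, 0.5, 10.0), bounds=([0, 0, 1], [1, 1, 200]))
print("recovered (kd, ks, n):", np.round(fit.x, 2))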
Cycle bases of graphs and sampled manifolds
C. Gotsman, K. Kaligosi, K. Mehlhorn, D. Michail and E. Pyrga
Technical Report, 2005
C. Gotsman, K. Kaligosi, K. Mehlhorn, D. Michail and E. Pyrga
Technical Report, 2005
Abstract
Point samples of a surface in $\mathbb{R}^3$ are the dominant output of a
multitude of 3D scanning devices. The usefulness of these devices rests on
being able to extract properties of the surface from the sample. We show that, under
certain sampling conditions, the minimum cycle basis of a nearest neighbor graph of
the sample encodes topological information about the surface and yields bases for the
trivial and non-trivial loops of the surface. We validate our results by experiments.
Export
BibTeX
@techreport{GotsmanKaligosiMehlhornMichailPyrga2005,
TITLE = {Cycle bases of graphs and sampled manifolds},
AUTHOR = {Gotsman, Craig and Kaligosi, Kanela and Mehlhorn, Kurt and Michail, Dimitrios and Pyrga, Evangelia},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-008},
NUMBER = {MPI-I-2005-1-008},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2005},
DATE = {2005},
ABSTRACT = {Point samples of a surface in $\mathbb{R}^3$ are the dominant output of a multitude of 3D scanning devices. The usefulness of these devices rests on being able to extract properties of the surface from the sample. We show that, under certain sampling conditions, the minimum cycle basis of a nearest neighbor graph of the sample encodes topological information about the surface and yields bases for the trivial and non-trivial loops of the surface. We validate our results by experiments.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Gotsman, Craig
%A Kaligosi, Kanela
%A Mehlhorn, Kurt
%A Michail, Dimitrios
%A Pyrga, Evangelia
%+ External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Cycle bases of graphs and sampled manifolds :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-684C-E
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-008
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2005
%P 30 p.
%X Point samples of a surface in $\mathbb{R}^3$ are the dominant output of a
multitude of 3D scanning devices. The usefulness of these devices rests on
being able to extract properties of the surface from the sample. We show that, under
certain sampling conditions, the minimum cycle basis of a nearest neighbor graph of
the sample encodes topological information about the surface and yields bases for the
trivial and non-trivial loops of the surface. We validate our results by experiments.
%B Research Report / Max-Planck-Institut für Informatik
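The pipeline sketched in the abstract can be tried on a toy sample. The sketch below (our own simplification: a 1-manifold, a fixed connection radius, and networkx's minimum cycle basis stand in for the report's setting) shows the basis splitting into short contractible cycles plus one long loop that wraps around the sampled curve.
import numpy as np
import networkx as nx

# Sample a closed curve (a circle in R^3) and build a nearest-neighbor
# graph by connecting samples closer than a fixed radius.
t = np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False)
pts = np.c_[np.cos(t), np.sin(t), np.zeros_like(t)]

G = nx.Graph()
for i in range(len(pts)):
    for j in range(i + 1, len(pts)):
        if np.linalg.norm(pts[i] - pts[j]) < 0.4:
            G.add_edge(i, j)

# The minimum cycle basis consists of many short (contractible) cycles
# plus one long cycle wrapping around the curve -- the non-trivial loop.
basis = nx.minimum_cycle_basis(G)
print(sorted(len(c) for c in basis))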
Reachability substitutes for planar digraphs
I. Katriel, M. Kutz and M. Skutella
Technical Report, 2005
I. Katriel, M. Kutz and M. Skutella
Technical Report, 2005
Abstract
Given a digraph $G = (V,E)$ with a set $U$ of vertices marked
``interesting,'' we want to find a smaller digraph $R = (V',E')$
with $V' \supseteq U$ in such a way that the reachabilities amongst
those interesting vertices in $G$ and $R$ are the same. So with
respect to the reachability relations within $U$, the digraph $R$
is a substitute for $G$.
We show that while almost all graphs do not have reachability
substitutes smaller than $\Omega(|U|^2/\log |U|)$, every planar
graph has a reachability substitute of size $O(|U| \log^2 |U|)$.
Our result rests on two new structural results for planar
dags, a separation procedure and a reachability theorem, which
might be of independent interest.
Export
BibTeX
@techreport{KatrielKutzSkutella2005,
TITLE = {Reachability substitutes for planar digraphs},
AUTHOR = {Katriel, Irit and Kutz, Martin and Skutella, Martin},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-002},
NUMBER = {MPI-I-2005-1-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2005},
DATE = {2005},
ABSTRACT = {Given a digraph $G = (V,E)$ with a set $U$ of vertices marked ``interesting,'' we want to find a smaller digraph $R = (V',E')$ with $V' \supseteq U$ in such a way that the reachabilities amongst those interesting vertices in $G$ and $R$ are the same. So with respect to the reachability relations within $U$, the digraph $R$ is a substitute for $G$. We show that while almost all graphs do not have reachability substitutes smaller than $\Omega(|U|^2/\log |U|)$, every planar graph has a reachability substitute of size $O(|U| \log^2 |U|)$. Our result rests on two new structural results for planar dags, a separation procedure and a reachability theorem, which might be of independent interest.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Katriel, Irit
%A Kutz, Martin
%A Skutella, Martin
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Reachability substitutes for planar digraphs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6859-0
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-002
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2005
%P 24 p.
%X Given a digraph $G = (V,E)$ with a set $U$ of vertices marked
``interesting,'' we want to find a smaller digraph $R = (V',E')$
with $V' \supseteq U$ in such a way that the reachabilities amongst
those interesting vertices in $G$ and $R$ are the same. So with
respect to the reachability relations within $U$, the digraph $R$
is a substitute for $G$.
We show that while almost all graphs do not have reachability
substitutes smaller than $\Omega(|U|^2/\log |U|)$, every planar
graph has a reachability substitute of size $O(|U| \log^2 |U|)$.
Our result rests on two new structural results for planar
dags, a separation procedure and a reachability theorem, which
might be of independent interest.
%B Research Report / Max-Planck-Institut für Informatik
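The defining property of a reachability substitute is easy to state operationally: the transitive closure restricted to $U$ must coincide in $G$ and $R$. A brute-force checker (our own illustration; the report's construction itself is not shown) fits in a few lines:
import networkx as nx

def same_reachability(G, R, U):
    """Check that R is a reachability substitute for G on the set U:
    for all u, v in U, v is reachable from u in G iff it is in R."""
    return all(nx.has_path(G, u, v) == nx.has_path(R, u, v)
               for u in U for v in U if u != v)

# Tiny example: a directed path 0 -> 1 -> 2 -> 3 with U = {0, 3};
# a single edge 0 -> 3 is a (much smaller) substitute.
G = nx.DiGraph([(0, 1), (1, 2), (2, 3)])
R = nx.DiGraph([(0, 3)])
print(same_reachability(G, R, {0, 3}))   # True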
A faster algorithm for computing a longest common increasing subsequence
I. Katriel and M. Kutz
Technical Report, 2005
I. Katriel and M. Kutz
Technical Report, 2005
Abstract
Let $A=\langle a_1,\dots,a_n\rangle$ and
$B=\langle b_1,\dots,b_m \rangle$ be two sequences with $m \ge n$,
whose elements are drawn from a totally ordered set.
We present an algorithm that finds a longest
common increasing subsequence of $A$ and $B$ in $O(m\log m+n\ell\log n)$
time and $O(m + n\ell)$ space, where $\ell$ is the length of the output.
A previous algorithm by Yang et al. needs $\Theta(mn)$ time and space,
so ours is faster for a wide range of values of $m,n$ and $\ell$.
Export
BibTeX
@techreport{KatrielKutz2005,
TITLE = {A faster algorithm for computing a longest common increasing subsequence},
AUTHOR = {Katriel, Irit and Kutz, Martin},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-007},
NUMBER = {MPI-I-2005-1-007},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2005},
DATE = {2005},
ABSTRACT = {Let $A=\langle a_1,\dots,a_n\rangle$ and $B=\langle b_1,\dots,b_m \rangle$ be two sequences with $m \ge n$, whose elements are drawn from a totally ordered set. We present an algorithm that finds a longest common increasing subsequence of $A$ and $B$ in $O(m\log m+n\ell\log n)$ time and $O(m + n\ell)$ space, where $\ell$ is the length of the output. A previous algorithm by Yang et al. needs $\Theta(mn)$ time and space, so ours is faster for a wide range of values of $m,n$ and $\ell$.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Katriel, Irit
%A Kutz, Martin
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A faster algorithm for computing a longest common increasing
subsequence :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-684F-8
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-007
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2005
%P 13 p.
%X Let $A=\langle a_1,\dots,a_n\rangle$ and
$B=\langle b_1,\dots,b_m \rangle$ be two sequences with $m \ge n$,
whose elements are drawn from a totally ordered set.
We present an algorithm that finds a longest
common increasing subsequence of $A$ and $B$ in $O(m\log m+n\ell\log n)$
time and $O(m + n\ell)$ space, where $\ell$ is the length of the output.
A previous algorithm by Yang et al. needs $\Theta(mn)$ time and space,
so ours is faster for a wide range of values of $m,n$ and $\ell$.
%B Research Report / Max-Planck-Institut für Informatik
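For reference, the $\Theta(mn)$ dynamic program of Yang et al. that the report improves upon fits in a few lines. This baseline (our own transcription of the standard DP, not the report's faster algorithm) makes the quantities $m$, $n$, and $\ell$ concrete:
def lcis_length(A, B):
    """Length of a longest common increasing subsequence of A and B,
    via the classic Theta(len(A)*len(B)) dynamic program."""
    dp = [0] * len(B)          # dp[j]: LCIS length ending with B[j]
    for a in A:
        best = 0               # best LCIS length ending with a value < a
        for j, b in enumerate(B):
            if b < a:
                best = max(best, dp[j])
            elif b == a:
                dp[j] = max(dp[j], best + 1)
    return max(dp, default=0)

print(lcis_length([3, 1, 4, 1, 5, 9, 2, 6], [1, 4, 5, 2, 6]))  # 4: (1, 4, 5, 6)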
Photometric calibration of high dynamic range cameras
G. Krawczyk, M. Gösele and H.-P. Seidel
Technical Report, 2005
G. Krawczyk, M. Gösele and H.-P. Seidel
Technical Report, 2005
Export
BibTeX
@techreport{KrawczykGoeseleSeidel2005,
TITLE = {Photometric calibration of high dynamic range cameras},
AUTHOR = {Krawczyk, Grzegorz and G{\"o}sele, Michael and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-4-005},
NUMBER = {MPI-I-2005-4-005},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2005},
DATE = {2005},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Krawczyk, Grzegorz
%A Gösele, Michael
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Photometric calibration of high dynamic range cameras :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6834-2
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-4-005
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2005
%P 21 p.
%B Research Report / Max-Planck-Institut für Informatik
Analysis and design of discrete normals and curvatures
T. Langer, A. Belyaev and H.-P. Seidel
Technical Report, 2005
T. Langer, A. Belyaev and H.-P. Seidel
Technical Report, 2005
Abstract
Accurate estimations of geometric properties of a surface (a curve) from
its discrete approximation are important for many computer graphics and
computer vision applications.
To assess and improve the quality of such an approximation we assume
that the
smooth surface (curve) is known in general form. Then we can represent the
surface (curve) by a Taylor series expansion
and compare its geometric properties with the corresponding discrete
approximations. In turn
we can either prove convergence of these approximations towards the true
properties
as the edge lengths tend to zero, or we can get hints on how
to eliminate the error.
In this report we propose and study discrete schemes for estimating
the curvature and torsion of a smooth 3D curve approximated by a polyline.
Thereby we make some interesting findings about connections between
(smooth) classical curves
and certain estimation schemes for polylines.
Furthermore, we consider several popular schemes for estimating the
surface normal
of a dense triangle mesh interpolating a smooth surface,
and analyze their asymptotic properties.
Special attention is paid to the mean curvature vector, which
approximates both normal direction and mean curvature. We evaluate a common discrete
approximation and
show how asymptotic analysis can be used to improve it.
It turns out that the integral formulation of the mean curvature
\begin{equation*}
H = \frac{1}{2 \pi} \int_0^{2 \pi} \kappa(\phi) d\phi,
\end{equation*}
can be computed by an exact quadrature formula.
The same is true for the integral formulations of Gaussian curvature and
the Taubin tensor.
The exact quadratures are then used to obtain reliable estimates
of the curvature tensor of a smooth surface approximated by a dense triangle
mesh. The proposed method is fast and often demonstrates a better
performance
than conventional curvature tensor estimation approaches. We also show
that the curvature tensor approximated by
our approach converges towards the true curvature tensor as the edge
lengths tend to zero.
Export
BibTeX
@techreport{LangerBelyaevSeidel2005,
TITLE = {Analysis and design of discrete normals and curvatures},
AUTHOR = {Langer, Torsten and Belyaev, Alexander and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-4-003},
NUMBER = {MPI-I-2005-4-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2005},
DATE = {2005},
ABSTRACT = {Accurate estimations of geometric properties of a surface (a curve) from its discrete approximation are important for many computer graphics and computer vision applications. To assess and improve the quality of such an approximation we assume that the smooth surface (curve) is known in general form. Then we can represent the surface (curve) by a Taylor series expansion and compare its geometric properties with the corresponding discrete approximations. In turn we can either prove convergence of these approximations towards the true properties as the edge lengths tend to zero, or we can get hints on how to eliminate the error. In this report we propose and study discrete schemes for estimating the curvature and torsion of a smooth 3D curve approximated by a polyline. Thereby we make some interesting findings about connections between (smooth) classical curves and certain estimation schemes for polylines. Furthermore, we consider several popular schemes for estimating the surface normal of a dense triangle mesh interpolating a smooth surface, and analyze their asymptotic properties. Special attention is paid to the mean curvature vector, which approximates both normal direction and mean curvature. We evaluate a common discrete approximation and show how asymptotic analysis can be used to improve it. It turns out that the integral formulation of the mean curvature \begin{equation*} H = \frac{1}{2 \pi} \int_0^{2 \pi} \kappa(\phi) d\phi, \end{equation*} can be computed by an exact quadrature formula. The same is true for the integral formulations of Gaussian curvature and the Taubin tensor. The exact quadratures are then used to obtain reliable estimates of the curvature tensor of a smooth surface approximated by a dense triangle mesh. The proposed method is fast and often demonstrates a better performance than conventional curvature tensor estimation approaches. We also show that the curvature tensor approximated by our approach converges towards the true curvature tensor as the edge lengths tend to zero.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Langer, Torsten
%A Belyaev, Alexander
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Analysis and design of discrete normals and curvatures :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6837-B
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-4-003
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2005
%P 42 p.
%X Accurate estimations of geometric properties of a surface (a curve) from
its discrete approximation are important for many computer graphics and
computer vision applications.
To assess and improve the quality of such an approximation we assume
that the
smooth surface (curve) is known in general form. Then we can represent the
surface (curve) by a Taylor series expansion
and compare its geometric properties with the corresponding discrete
approximations. In turn
we can either prove convergence of these approximations towards the true
properties
as the edge lengths tend to zero, or we can get hints on how
to eliminate the error.
In this report we propose and study discrete schemes for estimating
the curvature and torsion of a smooth 3D curve approximated by a polyline.
Thereby we make some interesting findings about connections between
(smooth) classical curves
and certain estimation schemes for polylines.
Furthermore, we consider several popular schemes for estimating the
surface normal
of a dense triangle mesh interpolating a smooth surface,
and analyze their asymptotic properties.
Special attention is paid to the mean curvature vector, which
approximates both normal direction and mean curvature. We evaluate a common discrete
approximation and
show how asymptotic analysis can be used to improve it.
It turns out that the integral formulation of the mean curvature
\begin{equation*}
H = \frac{1}{2 \pi} \int_0^{2 \pi} \kappa(\phi) d\phi,
\end{equation*}
can be computed by an exact quadrature formula.
The same is true for the integral formulations of Gaussian curvature and
the Taubin tensor.
The exact quadratures are then used to obtain reliable estimates
of the curvature tensor of a smooth surface approximated by a dense triangle
mesh. The proposed method is fast and often demonstrates a better
performance
than conventional curvature tensor estimation approaches. We also show
that the curvature tensor approximated by
our approach converges towards the true curvature tensor as the edge
lengths tend to zero.
%B Research Report / Max-Planck-Institut für Informatik
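The integral formulation quoted in the abstract, $H = \frac{1}{2\pi} \int_0^{2\pi} \kappa(\phi)\, d\phi$, can be checked numerically on a simple surface. For the paraboloid $z = (\kappa_1 x^2 + \kappa_2 y^2)/2$ at the origin, Euler's theorem gives the normal curvature $\kappa(\phi) = \kappa_1 \cos^2\phi + \kappa_2 \sin^2\phi$, and averaging over all directions recovers $H = (\kappa_1 + \kappa_2)/2$. The snippet below is our own sanity check, not the exact quadrature of the report:
import numpy as np

k1, k2 = 0.7, -0.2          # principal curvatures at the origin
phi = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)

# Euler's theorem: normal curvature in direction phi.
kappa = k1 * np.cos(phi) ** 2 + k2 * np.sin(phi) ** 2

# Average of normal curvatures over all directions = mean curvature.
print(kappa.mean(), (k1 + k2) / 2)   # both ~0.25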
Rank-maximal through maximum weight matchings
D. Michail
Technical Report, 2005
D. Michail
Technical Report, 2005
Abstract
Given a bipartite graph $G(V, E)$, $V = A \dot{\cup} B$,
where $|V|=n$ and $|E|=m$, and a partition of the edge set into
$r \le m$ disjoint subsets $E = E_1 \dot{\cup} E_2
\dot{\cup} \dots \dot{\cup} E_r$, called ranks,
the {\em rank-maximal matching} problem is to find a matching $M$
of $G$ such that $|M \cap E_1|$ is maximized; subject to that,
$|M \cap E_2|$ is maximized, and so on. Such a problem arises as an
optimization criterion for the assignment of a set of applicants to a
set of posts: the matching represents the assignment, and the
ranks on the edges correspond to the rankings of the posts submitted
by the applicants.
The rank-maximal matching problem has been studied before,
and an $O(r \sqrt{n}\, m)$ time and linear
space algorithm~\cite{IKMMP} was
presented. In this paper we present a new, simpler algorithm which
matches the running time and space complexity of the above
algorithm.
The new algorithm is based on a different approach:
it exploits the fact that the rank-maximal matching problem can
be reduced to a maximum weight matching problem where the
weight of an edge of rank $i$ is $2^{\lceil \log n \rceil (r-i)}$.
Because these edge weights are steeply distributed,
we can design a scaling algorithm which scales by a factor of
$n$ in each phase. We also show that in each phase one
maximum-cardinality matching computation is sufficient to obtain a new
optimal solution.
This algorithm answers an open question raised in the same
paper, namely whether the reduction to the maximum weight matching
problem can help us derive an efficient algorithm.
Export
BibTeX
@techreport{Michail2005,
TITLE = {Rank-maximal through maximum weight matchings},
AUTHOR = {Michail, Dimitrios},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-001},
NUMBER = {MPI-I-2005-1-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2005},
DATE = {2005},
ABSTRACT = {Given a bipartite graph $G(V, E)$, $V = A \dot{\cup} B$, where $|V|=n$ and $|E|=m$, and a partition of the edge set into $r \le m$ disjoint subsets $E = E_1 \dot{\cup} E_2 \dot{\cup} \dots \dot{\cup} E_r$, called ranks, the {\em rank-maximal matching} problem is to find a matching $M$ of $G$ such that $|M \cap E_1|$ is maximized; subject to that, $|M \cap E_2|$ is maximized, and so on. Such a problem arises as an optimization criterion for the assignment of a set of applicants to a set of posts: the matching represents the assignment, and the ranks on the edges correspond to the rankings of the posts submitted by the applicants. The rank-maximal matching problem has been studied before, and an $O(r \sqrt{n}\, m)$ time and linear space algorithm~\cite{IKMMP} was presented. In this paper we present a new, simpler algorithm which matches the running time and space complexity of the above algorithm. The new algorithm is based on a different approach: it exploits the fact that the rank-maximal matching problem can be reduced to a maximum weight matching problem where the weight of an edge of rank $i$ is $2^{\lceil \log n \rceil (r-i)}$. Because these edge weights are steeply distributed, we can design a scaling algorithm which scales by a factor of $n$ in each phase. We also show that in each phase one maximum-cardinality matching computation is sufficient to obtain a new optimal solution. This algorithm answers an open question raised in the same paper, namely whether the reduction to the maximum weight matching problem can help us derive an efficient algorithm.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Michail, Dimitrios
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Rank-maximal through maximum weight matchings :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-685C-A
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2005
%P 22 p.
%X Given a bipartite graph $G(V, E)$, $V = A \dot{\cup} B$,
where $|V|=n$ and $|E|=m$, and a partition of the edge set into
$r \le m$ disjoint subsets $E = E_1 \dot{\cup} E_2
\dot{\cup} \dots \dot{\cup} E_r$, called ranks,
the {\em rank-maximal matching} problem is to find a matching $M$
of $G$ such that $|M \cap E_1|$ is maximized; subject to that,
$|M \cap E_2|$ is maximized, and so on. Such a problem arises as an
optimization criterion for the assignment of a set of applicants to a
set of posts: the matching represents the assignment, and the
ranks on the edges correspond to the rankings of the posts submitted
by the applicants.
The rank-maximal matching problem has been studied before,
and an $O(r \sqrt{n}\, m)$ time and linear
space algorithm~\cite{IKMMP} was
presented. In this paper we present a new, simpler algorithm which
matches the running time and space complexity of the above
algorithm.
The new algorithm is based on a different approach:
it exploits the fact that the rank-maximal matching problem can
be reduced to a maximum weight matching problem where the
weight of an edge of rank $i$ is $2^{\lceil \log n \rceil (r-i)}$.
Because these edge weights are steeply distributed,
we can design a scaling algorithm which scales by a factor of
$n$ in each phase. We also show that in each phase one
maximum-cardinality matching computation is sufficient to obtain a new
optimal solution.
This algorithm answers an open question raised in the same
paper, namely whether the reduction to the maximum weight matching
problem can help us derive an efficient algorithm.
%B Research Report / Max-Planck-Institut für Informatik
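The reduction at the heart of the report is directly executable on small instances: give a rank-$i$ edge the weight $2^{\lceil \log n \rceil (r-i)}$ (read here with base-2 logarithm, an assumption on our part) and compute a maximum weight matching. A sketch with networkx, illustrative only; the report's actual contribution is a scaling algorithm that avoids handling these huge weights naively:
import math
import networkx as nx

# Applicants a1, a2; posts p1, p2; edge ranks (1 = most preferred).
edges = [("a1", "p1", 1), ("a1", "p2", 2), ("a2", "p1", 2)]
n = 4                       # number of vertices
r = max(rank for _, _, rank in edges)

G = nx.Graph()
for u, v, rank in edges:
    # Steep weights make one rank-1 edge outweigh any number of
    # lower-rank edges, so a max-weight matching is rank-maximal.
    G.add_edge(u, v, weight=2 ** (math.ceil(math.log2(n)) * (r - rank)))

M = nx.max_weight_matching(G)
print(sorted(tuple(sorted(e)) for e in M))   # [('a1', 'p1')]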
Sparse meshing of uncertain and noisy surface scattered data
O. Schall, A. Belyaev and H.-P. Seidel
Technical Report, 2005
O. Schall, A. Belyaev and H.-P. Seidel
Technical Report, 2005
Abstract
In this paper, we develop a method for generating
a high-quality approximation of a noisy set of points sampled
from a smooth surface by a sparse triangle mesh. The main
idea of the method consists of defining an appropriate set
of approximation centers and using them as the vertices
of a mesh approximating the given scattered data.
To choose the approximation centers, a clustering
procedure is used. With every point of the input data
we associate a local uncertainty
measure which is used to estimate the importance of
the point contribution to the reconstructed surface.
Then a global uncertainty measure is constructed from local ones.
The approximation centers are chosen as the points where
the global uncertainty measure attains its local minima.
It allows us to achieve a high-quality approximation of uncertain and
noisy point data by a sparse mesh. An interesting feature of our
approach
is that the uncertainty measures take into account the normal
directions
estimated at the scattered points.
In particular it results in accurate reconstruction of high-curvature
regions.
Export
BibTeX
@techreport{SchallBelyaevSeidel2005,
TITLE = {Sparse meshing of uncertain and noisy surface scattered data},
AUTHOR = {Schall, Oliver and Belyaev, Alexander and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-4-002},
NUMBER = {MPI-I-2005-4-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2005},
DATE = {2005},
ABSTRACT = {In this paper, we develop a method for generating a high-quality approximation of a noisy set of points sampled from a smooth surface by a sparse triangle mesh. The main idea of the method consists of defining an appropriate set of approximation centers and using them as the vertices of a mesh approximating the given scattered data. To choose the approximation centers, a clustering procedure is used. With every point of the input data we associate a local uncertainty measure which is used to estimate the importance of the point contribution to the reconstructed surface. Then a global uncertainty measure is constructed from local ones. The approximation centers are chosen as the points where the global uncertainty measure attains its local minima. It allows us to achieve a high-quality approximation of uncertain and noisy point data by a sparse mesh. An interesting feature of our approach is that the uncertainty measures take into account the normal directions estimated at the scattered points. In particular it results in accurate reconstruction of high-curvature regions.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Schall, Oliver
%A Belyaev, Alexander
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Sparse meshing of uncertain and noisy surface scattered data :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-683C-1
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-4-002
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2005
%P 20 p.
%X In this paper, we develop a method for generating
a high-quality approximation of a noisy set of points sampled
from a smooth surface by a sparse triangle mesh. The main
idea of the method consists of defining an appropriate set
of approximation centers and using them as the vertices
of a mesh approximating the given scattered data.
To choose the approximation centers, a clustering
procedure is used. With every point of the input data
we associate a local uncertainty
measure which is used to estimate the importance of
the point contribution to the reconstructed surface.
Then a global uncertainty measure is constructed from local ones.
The approximation centers are chosen as the points where
the global uncertainty measure attains its local minima.
It allows us to achieve a high-quality approximation of uncertain and
noisy point data by a sparse mesh. An interesting feature of our
approach
is that the uncertainty measures take into account the normal
directions
estimated at the scattered points.
In particular it results in accurate reconstruction of high-curvature
regions.
%B Research Report / Max-Planck-Institut für Informatik
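One plausible instantiation of the center-selection step reads as follows (our own toy version: the uncertainty measure, neighborhood size, and data below are invented for illustration; the report defines these quantities differently and adds a global measure and clustering): pick as approximation centers those samples whose local normal variation is minimal within their neighborhood.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)

# Noisy samples of the unit sphere, with their (noisy) normals.
pts = rng.normal(size=(500, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
normals = pts + rng.normal(0.0, 0.05, pts.shape)
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

tree = cKDTree(pts)
_, nbrs = tree.query(pts, k=9)          # each point plus 8 neighbors

# Local uncertainty: spread of the neighbor normals around their mean
# (the mean of unit vectors shrinks as they disagree).
mean_n = normals[nbrs].mean(axis=1)
uncertainty = 1.0 - np.linalg.norm(mean_n, axis=1)

# Centers: points whose uncertainty is minimal within their neighborhood.
is_min = np.array([uncertainty[i] <= uncertainty[nbrs[i]].min()
                   for i in range(len(pts))])
print("selected", is_min.sum(), "approximation centers out of", len(pts))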
Automated retraining methods for document classification and their parameter tuning
S. Siersdorfer and G. Weikum
Technical Report, 2005
S. Siersdorfer and G. Weikum
Technical Report, 2005
Abstract
This paper addresses the problem of semi-supervised classification on
document collections using retraining (also called self-training). A
possible application is focused Web
crawling, which may start with very few, manually selected, training
documents
but can be enhanced by automatically adding initially unlabeled,
positively classified Web pages for retraining.
Such an approach is by itself not robust and faces tuning problems
regarding parameters
like the number of selected documents, the number of retraining
iterations, and the ratio of positive
and negative classified samples used for retraining.
The paper develops methods for automatically tuning these parameters,
based on
predicting the leave-one-out error of a re-trained classifier and
avoiding dilution of the classifier by selecting too many or too weak
documents for retraining.
Our experiments
with three different datasets
confirm the practical viability of the approach.
Export
BibTeX
@techreport{SiersdorferWeikum2005,
TITLE = {Automated retraining methods for document classification and their parameter tuning},
AUTHOR = {Siersdorfer, Stefan and Weikum, Gerhard},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-5-002},
NUMBER = {MPI-I-2005-5-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2005},
DATE = {2005},
ABSTRACT = {This paper addresses the problem of semi-supervised classification on document collections using retraining (also called self-training). A possible application is focused Web crawling, which may start with very few, manually selected, training documents but can be enhanced by automatically adding initially unlabeled, positively classified Web pages for retraining. Such an approach is by itself not robust and faces tuning problems regarding parameters like the number of selected documents, the number of retraining iterations, and the ratio of positive and negative classified samples used for retraining. The paper develops methods for automatically tuning these parameters, based on predicting the leave-one-out error of a re-trained classifier and avoiding dilution of the classifier by selecting too many or too weak documents for retraining. Our experiments with three different datasets confirm the practical viability of the approach.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Siersdorfer, Stefan
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Automated retraining methods for document classification and their parameter tuning :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6823-8
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-5-002
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2005
%P 23 p.
%X This paper addresses the problem of semi-supervised classification on
document collections using retraining (also called self-training). A
possible application is focused Web
crawling, which may start with very few, manually selected, training
documents
but can be enhanced by automatically adding initially unlabeled,
positively classified Web pages for retraining.
Such an approach is by itself not robust and faces tuning problems
regarding parameters
like the number of selected documents, the number of retraining
iterations, and the ratio of positive
and negative classified samples used for retraining.
The paper develops methods for automatically tuning these parameters,
based on
predicting the leave-one-out error of a re-trained classifier and
avoiding dilution of the classifier by selecting too many or too weak
documents for retraining.
Our experiments
with three different datasets
confirm the practical viability of the approach.
%B Research Report / Max-Planck-Institut für Informatik
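The retraining loop that the paper tunes can be sketched in a few lines with scikit-learn. This is our own minimal version: the classifier, confidence threshold, and iteration count below are placeholders for exactly the parameters the paper learns to tune automatically.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
labeled = np.zeros(len(y), dtype=bool)
labeled[:20] = True                      # very few initial training docs
y_train = y.copy()

clf = LogisticRegression(max_iter=1000)
for _ in range(5):                       # number of retraining iterations
    clf.fit(X[labeled], y_train[labeled])
    proba = clf.predict_proba(X[~labeled])
    confident = proba.max(axis=1) > 0.95  # document-selection threshold
    idx = np.flatnonzero(~labeled)[confident]
    if len(idx) == 0:
        break
    y_train[idx] = clf.predict(X[idx])   # pseudo-labels for retraining
    labeled[idx] = True

print("accuracy after self-training:", round(clf.score(X, y), 3))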
Joint Motion and Reflectance Capture for Creating Relightable 3D Videos
C. Theobalt, N. Ahmed, E. de Aguiar, G. Ziegler, H. Lensch, M. Magnor and H.-P. Seidel
Technical Report, 2005
C. Theobalt, N. Ahmed, E. de Aguiar, G. Ziegler, H. Lensch, M. Magnor and H.-P. Seidel
Technical Report, 2005
Abstract
Passive optical motion capture is able to provide
authentically animated, photo-realistically and view-dependently
textured models of real people.
To import real-world characters into virtual environments, however,
surface reflectance properties must also be known.
We describe a video-based modeling approach that captures human
motion as well as reflectance characteristics from a handful of
synchronized video recordings.
The presented method is able to recover spatially varying
reflectance properties of clothes
by exploiting the time-varying orientation of each surface point
with respect to camera and light direction.
The resulting model description enables us to match animated subject
appearance to different lighting conditions, as well as to
interchange surface attributes among different people, e.g. for
virtual dressing.
Our contribution allows populating virtual worlds with correctly relit,
real-world people.
Export
BibTeX
@techreport{TheobaltTR2005,
TITLE = {Joint Motion and Reflectance Capture for Creating Relightable {3D} Videos},
AUTHOR = {Theobalt, Christian and Ahmed, Naveed and de Aguiar, Edilson and Ziegler, Gernot and Lensch, Hendrik and Magnor, Marcus and Seidel, Hans-Peter},
LANGUAGE = {eng},
NUMBER = {MPI-I-2005-4-004},
LOCALID = {Local-ID: C1256BDE005F57A8-5B757D3AA9584EEBC12570A7003C813D-TheobaltTR2005},
YEAR = {2005},
DATE = {2005},
ABSTRACT = {Passive optical motion capture is able to provide authentically animated, photo-realistically and view-dependently textured models of real people. To import real-world characters into virtual environments, however, surface reflectance properties must also be known. We describe a video-based modeling approach that captures human motion as well as reflectance characteristics from a handful of synchronized video recordings. The presented method is able to recover spatially varying reflectance properties of clothes by exploiting the time-varying orientation of each surface point with respect to camera and light direction. The resulting model description enables us to match animated subject appearance to different lighting conditions, as well as to interchange surface attributes among different people, e.g. for virtual dressing. Our contribution allows populating virtual worlds with correctly relit, real-world people.},
}
Endnote
%0 Report
%A Theobalt, Christian
%A Ahmed, Naveed
%A de Aguiar, Edilson
%A Ziegler, Gernot
%A Lensch, Hendrik
%A Magnor, Marcus
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Programming Logics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Graphics - Optics - Vision, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Joint Motion and Reflectance Capture for Creating Relightable 3D Videos :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-2879-B
%F EDOC: 520731
%F OTHER: Local-ID: C1256BDE005F57A8-5B757D3AA9584EEBC12570A7003C813D-TheobaltTR2005
%D 2005
%X Passive optical motion capture is able to provide
authentically animated, photo-realistically and view-dependently
textured models of real people.
To import real-world characters into virtual environments, however,
surface reflectance properties must also be known.
We describe a video-based modeling approach that captures human
motion as well as reflectance characteristics from a handful of
synchronized video recordings.
The presented method is able to recover spatially varying
reflectance properties of clothes
by exploiting the time-varying orientation of each surface point
with respect to camera and light direction.
The resulting model description enables us to match animated subject
appearance to different lighting conditions, as well as to
interchange surface attributes among different people, e.g. for
virtual dressing.
Our contribution allows populating virtual worlds with correctly relit,
real-world people.
2004
Filtering algorithms for the Same and UsedBy constraints
N. Beldiceanu, I. Katriel and S. Thiel
Technical Report, 2004
N. Beldiceanu, I. Katriel and S. Thiel
Technical Report, 2004
Export
BibTeX
@techreport{BeldiceanuKatrielThiel2004,
TITLE = {Filtering algorithms for the Same and {UsedBy} constraints},
AUTHOR = {Beldiceanu, Nicolas and Katriel, Irit and Thiel, Sven},
LANGUAGE = {eng},
NUMBER = {MPI-I-2004-1-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2004},
DATE = {2004-01},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Beldiceanu, Nicolas
%A Katriel, Irit
%A Thiel, Sven
%+ Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Filtering algorithms for the Same and UsedBy constraints :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-290C-C
%F EDOC: 237881
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2004
%P 33 p.
%B Research Report
EXACUS : Efficient and Exact Algorithms for Curves and Surfaces
E. Berberich, A. Eigenwillig, M. Hemmer, S. Hert, L. Kettner, K. Mehlhorn, J. Reichel, S. Schmitt, E. Schömer, D. Weber and N. Wolpert
Technical Report, 2004
E. Berberich, A. Eigenwillig, M. Hemmer, S. Hert, L. Kettner, K. Mehlhorn, J. Reichel, S. Schmitt, E. Schömer, D. Weber and N. Wolpert
Technical Report, 2004
Export
BibTeX
@techreport{Berberich_ECG-TR-361200-02,
TITLE = {{EXACUS} : Efficient and Exact Algorithms for Curves and Surfaces},
AUTHOR = {Berberich, Eric and Eigenwillig, Arno and Hemmer, Michael and Hert, Susan and Kettner, Lutz and Mehlhorn, Kurt and Reichel, Joachim and Schmitt, Susanne and Sch{\"o}mer, Elmar and Weber, Dennis and Wolpert, Nicola},
LANGUAGE = {eng},
NUMBER = {ECG-TR-361200-02},
INSTITUTION = {INRIA},
ADDRESS = {Sophia Antipolis},
YEAR = {2004},
DATE = {2004},
TYPE = {ECG Technical Report},
EDITOR = {{Effective Computational Geometry for Curves and Surfaces}},
}
Endnote
%0 Report
%A Berberich, Eric
%A Eigenwillig, Arno
%A Hemmer, Michael
%A Hert, Susan
%A Kettner, Lutz
%A Mehlhorn, Kurt
%A Reichel, Joachim
%A Schmitt, Susanne
%A Schömer, Elmar
%A Weber, Dennis
%A Wolpert, Nicola
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T EXACUS : Efficient and Exact Algorithms for Curves and Surfaces :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-2B89-6
%F EDOC: 237751
%Y INRIA
%C Sophia Antipolis
%D 2004
%P 8 p.
%B ECG Technical Report
An empirical comparison of software for constructing arrangements of curved arcs
E. Berberich, A. Eigenwillig, I. Emiris, E. Fogel, M. Hemmer, D. Halperin, A. Kakargias, L. Kettner, K. Mehlhorn, S. Pion, E. Schömer, M. Teillaud, R. Wein and N. Wolpert
Technical Report, 2004
E. Berberich, A. Eigenwillig, I. Emiris, E. Fogel, M. Hemmer, D. Halperin, A. Kakargias, L. Kettner, K. Mehlhorn, S. Pion, E. Schömer, M. Teillaud, R. Wein and N. Wolpert
Technical Report, 2004
Export
BibTeX
@techreport{Berberich_ECG-TR-361200-01,
TITLE = {An empirical comparison of software for constructing arrangements of curved arcs},
AUTHOR = {Berberich, Eric and Eigenwillig, Arno and Emiris, Ioannis and Fogel, Efraim and Hemmer, Michael and Halperin, Dan and Kakargias, Athanasios and Kettner, Lutz and Mehlhorn, Kurt and Pion, Sylvain and Sch{\"o}mer, Elmar and Teillaud, Monique and Wein, Ron and Wolpert, Nicola},
LANGUAGE = {eng},
NUMBER = {ECG-TR-361200-01},
INSTITUTION = {INRIA},
ADDRESS = {Sophia Antipolis},
YEAR = {2004},
DATE = {2004},
TYPE = {ECG Technical Report},
EDITOR = {{Effective Computational Geometry for Curves and Surfaces}},
}
Endnote
%0 Report
%A Berberich, Eric
%A Eigenwillig, Arno
%A Emiris, Ioannis
%A Fogel, Efraim
%A Hemmer, Michael
%A Halperin, Dan
%A Kakargias, Athanasios
%A Kettner, Lutz
%A Mehlhorn, Kurt
%A Pion, Sylvain
%A Schömer, Elmar
%A Teillaud, Monique
%A Wein, Ron
%A Wolpert, Nicola
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T An empirical comparison of software for constructing arrangements of curved arcs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-2B87-A
%F EDOC: 237743
%Y INRIA
%C Sophia Antipolis
%D 2004
%P 11 p.
%B ECG Technical Report
On the Hadwiger’s Conjecture for Graph Products
L. S. Chandran and N. Sivadasan
Technical Report, 2004a
L. S. Chandran and N. Sivadasan
Technical Report, 2004a
Export
BibTeX
@techreport{TR2004,
TITLE = {On the {Hadwiger's} Conjecture for Graph Products},
AUTHOR = {Chandran, L. Sunil and Sivadasan, N.},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2004-1-006},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken, Germany},
YEAR = {2004},
DATE = {2004},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Chandran, L. Sunil
%A Sivadasan, N.
%+ Discrete Optimization, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On the Hadwiger's Conjecture for Graph Products :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-001A-0C8F-A
%@ 0946-011X
%Y Max-Planck-Institut für Informatik
%C Saarbrücken, Germany
%D 2004
%B Research Report
On the Hadwiger’s conjecture for graph products
L. S. Chandran and N. Sivadasan
Technical Report, 2004b
L. S. Chandran and N. Sivadasan
Technical Report, 2004b
Export
BibTeX
@techreport{ChandranSivadasan2004b,
TITLE = {On the Hadwiger's conjecture for graph products},
AUTHOR = {Chandran, L. Sunil and Sivadasan, Naveen},
LANGUAGE = {eng},
NUMBER = {MPI-I-2004-1-006},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2004},
DATE = {2004},
TYPE = {Max-Planck-Institut für Informatik <Saarbrücken>: Research Report},
EDITOR = {{Max-Planck-Institut f{\"u}r Informatik {\textless}Saarbr{\"u}cken{\textgreater}}},
}
Endnote
%0 Report
%A Chandran, L. Sunil
%A Sivadasan, Naveen
%+ Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On the Hadwiger's conjecture for graph products :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-2BA6-4
%F EDOC: 241593
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2004
%P 10 p.
%B Max-Planck-Institut für Informatik <Saarbrücken>: Research Report
Faster ray tracing with SIMD shaft culling
K. Dmitriev, V. Havran and H.-P. Seidel
Technical Report, 2004
K. Dmitriev, V. Havran and H.-P. Seidel
Technical Report, 2004
Export
BibTeX
@techreport{DmitrievHavranSeidel2004,
TITLE = {Faster ray tracing with {SIMD} shaft culling},
AUTHOR = {Dmitriev, Kirill and Havran, Vlastimil and Seidel, Hans-Peter},
LANGUAGE = {eng},
NUMBER = {MPI-I-2004-4-006},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2004},
DATE = {2004-12},
TYPE = {Max-Planck-Institut für Informatik <Saarbrücken>: Research Report},
EDITOR = {{Max-Planck-Institut f{\"u}r Informatik {\textless}Saarbr{\"u}cken{\textgreater}}},
}
Endnote
%0 Report
%A Dmitriev, Kirill
%A Havran, Vlastimil
%A Seidel, Hans-Peter
%+ Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Faster ray tracing with SIMD shaft culling :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-28BB-A
%F EDOC: 237860
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2004
%P 13 p.
%B Max-Planck-Institut für Informatik <Saarbrücken>: Research Report
The LEDA class real number - extended version
S. Funke, K. Mehlhorn, S. Schmitt, C. Burnikel, R. Fleischer and S. Schirra
Technical Report, 2004
S. Funke, K. Mehlhorn, S. Schmitt, C. Burnikel, R. Fleischer and S. Schirra
Technical Report, 2004
Export
BibTeX
@techreport{Funke_ECG-TR-363110-01,
TITLE = {The {LEDA} class real number -- extended version},
AUTHOR = {Funke, Stefan and Mehlhorn, Kurt and Schmitt, Susanne and Burnikel, Christoph and Fleischer, Rudolf and Schirra, Stefan},
LANGUAGE = {eng},
NUMBER = {ECG-TR-363110-01},
INSTITUTION = {INRIA},
ADDRESS = {Sophia Antipolis},
YEAR = {2004},
DATE = {2004},
TYPE = {ECG Technical Report},
EDITOR = {{Effective Computational Geometry for Curves and Surfaces}},
}
Endnote
%0 Report
%A Funke, Stefan
%A Mehlhorn, Kurt
%A Schmitt, Susanne
%A Burnikel, Christoph
%A Fleischer, Rudolf
%A Schirra, Stefan
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T The LEDA class real number - extended version :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-2B8C-F
%F EDOC: 237780
%Y INRIA
%C Sophia Antipolis
%D 2004
%P 2 p.
%B ECG Technical Report
Modeling hair using a wisp hair model
J. Haber, C. Schmitt, M. Koster and H.-P. Seidel
Technical Report, 2004
J. Haber, C. Schmitt, M. Koster and H.-P. Seidel
Technical Report, 2004
Export
BibTeX
@techreport{HaberSchmittKosterSeidel2004,
TITLE = {Modeling hair using a wisp hair model},
AUTHOR = {Haber, J{\"o}rg and Schmitt, Carina and Koster, Martin and Seidel, Hans-Peter},
LANGUAGE = {eng},
NUMBER = {MPI-I-2004-4-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2004},
DATE = {2004-05},
TYPE = {Max-Planck-Institut für Informatik <Saarbrücken>: Research Report},
EDITOR = {{Max-Planck-Institut f{\"u}r Informatik {\textless}Saarbr{\"u}cken{\textgreater}}},
}
Endnote
%0 Report
%A Haber, Jörg
%A Schmitt, Carina
%A Koster, Martin
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Modeling hair using a wisp hair model :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-28F6-4
%F EDOC: 237864
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2004
%P 38 p.
%B Max-Planck-Institut für Informatik <Saarbrücken>: Research Report
Effects of a modular filter on geometric applications
M. Hemmer, L. Kettner and E. Schömer
Technical Report, 2004
M. Hemmer, L. Kettner and E. Schömer
Technical Report, 2004
Export
BibTeX
@techreport{Hemmer_ECG-TR-363111-01,
TITLE = {Effects of a modular filter on geometric applications},
AUTHOR = {Hemmer, Michael and Kettner, Lutz and Sch{\"o}mer, Elmar},
LANGUAGE = {eng},
NUMBER = {ECG-TR-363111-01},
INSTITUTION = {INRIA},
ADDRESS = {Sophia Antipolis},
YEAR = {2004},
DATE = {2004},
TYPE = {ECG Technical Report},
EDITOR = {{Effective Computational Geometry for Curves and Surfaces}},
}
Endnote
%0 Report
%A Hemmer, Michael
%A Kettner, Lutz
%A Schömer, Elmar
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Effects of a modular filter on geometric applications :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-2B8F-9
%F EDOC: 237782
%Y INRIA
%C Sophia Antipolis
%D 2004
%P 7 p.
%B ECG Technical Report
Neural meshes: surface reconstruction with a learning algorithm
I. Ivrissimtzis, W.-K. Jeong, S. Lee, Y. Lee and H.-P. Seidel
Technical Report, 2004
I. Ivrissimtzis, W.-K. Jeong, S. Lee, Y. Lee and H.-P. Seidel
Technical Report, 2004
Export
BibTeX
@techreport{,
TITLE = {Neural meshes: surface reconstruction with a learning algorithm},
AUTHOR = {Ivrissimtzis, Ioannis and Jeong, Won-Ki and Lee, Seungyong and Lee, Yunjin and Seidel, Hans-Peter},
LANGUAGE = {eng},
NUMBER = {MPI-I-2004-4-005},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2004},
DATE = {2004-10},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Ivrissimtzis, Ioannis
%A Jeong, Won-Ki
%A Lee, Seungyong
%A Lee, Yunjin
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Neural meshes: surface reconstruction with a learning algorithm :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-28C9-A
%F EDOC: 237862
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2004
%P 16 p.
%B Research Report
On algorithms for online topological ordering and sorting
I. Katriel
Technical Report, 2004
I. Katriel
Technical Report, 2004
Export
BibTeX
@techreport{,
TITLE = {On algorithms for online topological ordering and sorting},
AUTHOR = {Katriel, Irit},
LANGUAGE = {eng},
NUMBER = {MPI-I-2004-1-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2004},
DATE = {2004-02},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Katriel, Irit
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On algorithms for online topological ordering and sorting :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-2906-7
%F EDOC: 237878
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2004
%P 12 p.
%B Research Report
Classroom examples of robustness problems in geometric computations
L. Kettner, K. Mehlhorn, S. Pion, S. Schirra and C. Yap
Technical Report, 2004
L. Kettner, K. Mehlhorn, S. Pion, S. Schirra and C. Yap
Technical Report, 2004
Export
BibTeX
@techreport{Kettner_ECG-TR-363100-01,
TITLE = {Classroom examples of robustness problems in geometric computations},
AUTHOR = {Kettner, Lutz and Mehlhorn, Kurt and Pion, Sylvain and Schirra, Stefan and Yap, Chee},
LANGUAGE = {eng},
NUMBER = {ECG-TR-363100-01},
INSTITUTION = {INRIA},
ADDRESS = {Sophia Antipolis},
YEAR = {2004},
DATE = {2004},
TYPE = {ECG Technical Report},
EDITOR = {{Effective Computational Geometry for Curves and Surfaces}},
VOLUME = {3221},
}
Endnote
%0 Report
%A Kettner, Lutz
%A Mehlhorn, Kurt
%A Pion, Sylvain
%A Schirra, Stefan
%A Yap, Chee
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Classroom examples of robustness problems in geometric computations :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-2B92-0
%F EDOC: 237797
%Y INRIA
%C Sophia Antipolis
%D 2004
%P 12 p.
%B ECG Technical Report
%N 3221
A fast root checking algorithm
C. Klein
Technical Report, 2004
C. Klein
Technical Report, 2004
Export
BibTeX
@techreport{Klein_ECG-TR-363109-02,
TITLE = {A fast root checking algorithm},
AUTHOR = {Klein, Christian},
LANGUAGE = {eng},
NUMBER = {ECG-TR-363109-02},
INSTITUTION = {INRIA},
ADDRESS = {Sophia Antipolis},
YEAR = {2004},
DATE = {2004},
TYPE = {ECG Technical Report},
EDITOR = {{Effective Computational Geometry for Curves and Surfaces}},
}
Endnote
%0 Report
%A Klein, Christian
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A fast root checking algorithm :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-2B96-8
%F EDOC: 237826
%Y INRIA
%C Sophia Antipolis
%D 2004
%P 11 p.
%B ECG Technical Report
New bounds for the Descartes method
W. Krandick and K. Mehlhorn
Technical Report, 2004
W. Krandick and K. Mehlhorn
Technical Report, 2004
Export
BibTeX
@techreport{Krandick_DU-CS-04-04,
TITLE = {New bounds for the Descartes method},
AUTHOR = {Krandick, Werner and Mehlhorn, Kurt},
LANGUAGE = {eng},
NUMBER = {DU-CS-04-04},
INSTITUTION = {Drexel University},
ADDRESS = {Philadelphia, Pa.},
YEAR = {2004},
DATE = {2004},
TYPE = {Drexel University / Department of Computer Science:Technical Report},
EDITOR = {{Drexel University {\textless}Philadelphia, Pa.{\textgreater} / Department of Computer Science}},
}
Endnote
%0 Report
%A Krandick, Werner
%A Mehlhorn, Kurt
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T New bounds for the Descartes method :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-2B99-2
%F EDOC: 237829
%Y Drexel University
%C Philadelphia, Pa.
%D 2004
%P 18 p.
%B Drexel University / Department of Computer Science:Technical Report
A simpler linear time 2/3-epsilon approximation
P. Sanders and S. Pettie
Technical Report, 2004a
P. Sanders and S. Pettie
Technical Report, 2004a
Export
BibTeX
@techreport{,
TITLE = {A simpler linear time 2/3-epsilon approximation},
AUTHOR = {Sanders, Peter and Pettie, Seth},
LANGUAGE = {eng},
NUMBER = {MPI-I-2004-1-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2004},
DATE = {2004-01},
TYPE = {Max-Planck-Institut für Informatik <Saarbrücken>: Research Report},
EDITOR = {{Max-Planck-Institut f{\"u}r Informatik {\textless}Saarbr{\"u}cken{\textgreater}}},
}
Endnote
%0 Report
%A Sanders, Peter
%A Pettie, Seth
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A simpler linear time 2/3-epsilon approximation :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-2909-1
%F EDOC: 237880
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2004
%P 7 p.
%B Max-Planck-Institut für Informatik <Saarbrücken>: Research Report
A simpler linear time 2/3 - epsilon approximation for maximum weight matching
P. Sanders and S. Pettie
Technical Report, 2004b
P. Sanders and S. Pettie
Technical Report, 2004b
Abstract
We present two $\frac{2}{3} - \epsilon$ approximation algorithms for the
maximum weight matching problem that run in time
$O(m\log\frac{1}{\epsilon})$. We give a simple and practical
randomized algorithm and a somewhat more complicated deterministic
algorithm. Both algorithms are exponentially faster in
terms of $\epsilon$ than a recent algorithm by Drake and Hougardy.
We also show that our algorithms can be generalized to find a
$1-\epsilon$ approximation to the maximum weight matching, for any
$\epsilon>0$.
Export
BibTeX
@techreport{,
TITLE = {A simpler linear time 2/3 -- epsilon approximation for maximum weight matching},
AUTHOR = {Sanders, Peter and Pettie, Seth},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2004-1-002},
NUMBER = {MPI-I-2004-1-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2004},
DATE = {2004},
ABSTRACT = {We present two $\frac{2}{3} - \epsilon$ approximation algorithms for the maximum weight matching problem that run in time $O(m\log\frac{1}{\epsilon})$. We give a simple and practical randomized algorithm and a somewhat more complicated deterministic algorithm. Both algorithms are exponentially faster in terms of $\epsilon$ than a recent algorithm by Drake and Hougardy. We also show that our algorithms can be generalized to find a $1-\epsilon$ approximation to the maximum weight matching, for any $\epsilon>0$.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Sanders, Peter
%A Pettie, Seth
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A simpler linear time 2/3 - epsilon approximation for maximum weight matching :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6862-B
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2004-1-002
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2004
%P 10 p.
%X We present two $\frac{2}{3} - \epsilon$ approximation algorithms for the
maximum weight matching problem that run in time
$O(m\log\frac{1}{\epsilon})$. We give a simple and practical
randomized algorithm and a somewhat more complicated deterministic
algorithm. Both algorithms are exponentially faster in
terms of $\epsilon$ than a recent algorithm by Drake and Hougardy.
We also show that our algorithms can be generalized to find a
$1-\epsilon$ approximation to the maximum weight matching, for any
$\epsilon>0$.
%B Research Report / Max-Planck-Institut für Informatik
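The (2/3 - epsilon)-approximation itself is too involved to restate here, but the baseline it improves on is easy to make concrete: the classic greedy heuristic, which repeatedly takes the heaviest edge whose endpoints are both still free, is a 1/2-approximation for maximum weight matching. A minimal Python sketch of that baseline (not of the report's algorithm; the example graph is made up):

def greedy_matching(edges):
    """edges: list of (weight, u, v). Classic 1/2-approximation:
    scan edges by decreasing weight, keep an edge iff both ends are free."""
    matched, matching = set(), []
    for w, u, v in sorted(edges, reverse=True):
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# A 3-edge path shows the factor: greedy keeps only the middle edge.
print(greedy_matching([(2, 'a', 'b'), (3, 'b', 'c'), (2, 'c', 'd')]))
# -> [('b', 'c')] of weight 3; the optimum {a-b, c-d} has weight 4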
Common subexpression search in LEDA_reals : a study of the diamond-operator
S. Schmitt
Technical Report, 2004a
S. Schmitt
Technical Report, 2004a
Export
BibTeX
@techreport{Schmitt_ECG-TR-363109-01,
TITLE = {Common subexpression search in {LEDA}{\textunderscore}reals : a study of the diamond-operator},
AUTHOR = {Schmitt, Susanne},
LANGUAGE = {eng},
NUMBER = {ECG-TR-363109-01},
INSTITUTION = {INRIA},
ADDRESS = {Sophia Antipolis},
YEAR = {2004},
DATE = {2004},
TYPE = {ECG Technical Report},
EDITOR = {{Effective Computational Geometry for Curves and Surfaces}},
}
Endnote
%0 Report
%A Schmitt, Susanne
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Common subexpression search in LEDA_reals : a study of the diamond-operator :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-2B9C-B
%F EDOC: 237830
%Y INRIA
%C Sophia Antipolis
%D 2004
%P 5 p.
%B ECG Technical Report
Improved separation bounds for the diamond operator
S. Schmitt
Technical Report, 2004b
S. Schmitt
Technical Report, 2004b
Export
BibTeX
@techreport{Schmitt_ECG-TR-363108-01,
TITLE = {Improved separation bounds for the diamond operator},
AUTHOR = {Schmitt, Susanne},
LANGUAGE = {eng},
NUMBER = {ECG-TR-363108-01},
INSTITUTION = {INRIA},
ADDRESS = {Sophia Antipolis},
YEAR = {2004},
DATE = {2004},
TYPE = {ECG Technical Report},
EDITOR = {{Effective Computational Geometry for Curves and Surfaces}},
}
Endnote
%0 Report
%A Schmitt, Susanne
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Improved separation bounds for the diamond operator :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-2B9F-5
%F EDOC: 237831
%Y INRIA
%C Sophia Antipolis
%D 2004
%P 13 p.
%B ECG Technical Report
A comparison of polynomial evaluation schemes
S. Schmitt and L. Fousse
Technical Report, 2004
S. Schmitt and L. Fousse
Technical Report, 2004
Export
BibTeX
@techreport{,
TITLE = {A comparison of polynomial evaluation schemes},
AUTHOR = {Schmitt, Susanne and Fousse, Laurent},
LANGUAGE = {eng},
NUMBER = {MPI-I-2004-1-005},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2004},
DATE = {2004-06},
TYPE = {Max-Planck-Institut für Informatik <Saarbrücken>: Research Report},
EDITOR = {Becker and {Max-Planck-Institut f{\"u}r Informatik {\textless}Saarbr{\"u}cken{\textgreater}}},
}
Endnote
%0 Report
%A Schmitt, Susanne
%A Fousse, Laurent
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A comparison of polynomial evaluation schemes :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-28EC-B
%F EDOC: 237875
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2004
%P 16 p.
%B Max-Planck-Institut für Informatik <Saarbrücken>: Research Report
Goal-oriented methods and meta methods for document classification and their parameter tuning
S. Siersdorfer, S. Sizov and G. Weikum
Technical Report, 2004
S. Siersdorfer, S. Sizov and G. Weikum
Technical Report, 2004
Export
BibTeX
@techreport{,
TITLE = {Goal-oriented methods and meta methods for document classification and their parameter tuning},
AUTHOR = {Siersdorfer, Stefan and Sizov, Sergej and Weikum, Gerhard},
LANGUAGE = {eng},
NUMBER = {MPI-I-2004-5-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2004},
DATE = {2004-05},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Siersdorfer, Stefan
%A Sizov, Sergej
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Goal-oriented methods and meta methods for document classification and their parameter tuning :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-28F3-A
%F EDOC: 237842
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2004
%P 36 p.
%B Research Report
On scheduling with bounded migration
N. Sivadasan, P. Sanders and M. Skutella
Technical Report, 2004a
N. Sivadasan, P. Sanders and M. Skutella
Technical Report, 2004a
Export
BibTeX
@techreport{,
TITLE = {On scheduling with bounded migration},
AUTHOR = {Sivadasan, Naveen and Sanders, Peter and Skutella, Martin},
LANGUAGE = {eng},
NUMBER = {MPI-I-2004-1-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2004},
DATE = {2004-05},
TYPE = {Max-Planck-Institut für Informatik <Saarbrücken>: Research Report},
EDITOR = {{Max-Planck-Institut f{\"u}r Informatik {\textless}Saarbr{\"u}cken{\textgreater}}},
}
Endnote
%0 Report
%A Sivadasan, Naveen
%A Sanders, Peter
%A Skutella, Martin
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On scheduling with bounded migration :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-28F9-D
%F EDOC: 237877
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2004
%P 22 p.
%B Max-Planck-Institut für Informatik <Saarbrücken>: Research Report
Online scheduling with bounded migration
N. Sivadasan, P. Sanders and M. Skutella
Technical Report, 2004b
N. Sivadasan, P. Sanders and M. Skutella
Technical Report, 2004b
Export
BibTeX
@techreport{,
TITLE = {Online scheduling with bounded migration},
AUTHOR = {Sivadasan, Naveen and Sanders, Peter and Skutella, Martin},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2004-1-004},
NUMBER = {MPI-I-2004-1-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2004},
DATE = {2004},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Sivadasan, Naveen
%A Sanders, Peter
%A Skutella, Martin
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Online scheduling with bounded migration :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-685F-4
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2004-1-004
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2004
%P 21 p.
%B Research Report / Max-Planck-Institut für Informatik
r-Adaptive parameterization of surfaces
R. Zayer, C. Rössl and H.-P. Seidel
Technical Report, 2004
R. Zayer, C. Rössl and H.-P. Seidel
Technical Report, 2004
Export
BibTeX
@techreport{,
TITLE = {r-Adaptive parameterization of surfaces},
AUTHOR = {Zayer, Rhaleb and R{\"o}ssl, Christian and Seidel, Hans-Peter},
LANGUAGE = {eng},
NUMBER = {MPI-I-2004-4-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2004},
DATE = {2004-06},
TYPE = {Max-Planck-Institut für Informatik <Saarbrücken>: Research Report},
EDITOR = {{Max-Planck-Institut f{\"u}r Informatik {\textless}Saarbr{\"u}cken{\textgreater}}},
}
Endnote
%0 Report
%A Zayer, Rhaleb
%A Rössl, Christian
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T r-Adaptive parameterization of surfaces :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-28E9-2
%F EDOC: 237863
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2004
%P 10 p.
%B Max-Planck-Institut für Informatik <Saarbrücken>: Research Report
2003
Improving linear programming approaches for the Steiner tree problem
E. Althaus, T. Polzin and S. Daneshmand
Technical Report, 2003
E. Althaus, T. Polzin and S. Daneshmand
Technical Report, 2003
Abstract
We present two theoretically interesting and empirically successful
techniques for improving the linear programming approaches, namely
graph transformation and local cuts, in the context of the
Steiner problem. We show the impact of these techniques on the
solution of the largest benchmark instances ever solved.
Export
BibTeX
@techreport{MPI-I-2003-1-004,
TITLE = {Improving linear programming approaches for the Steiner tree problem},
AUTHOR = {Althaus, Ernst and Polzin, Tobias and Daneshmand, Siavash},
LANGUAGE = {eng},
NUMBER = {MPI-I-2003-1-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {We present two theoretically interesting and empirically successful techniques for improving the linear programming approaches, namely graph transformation and local cuts, in the context of the Steiner problem. We show the impact of these techniques on the solution of the largest benchmark instances ever solved.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Althaus, Ernst
%A Polzin, Tobias
%A Daneshmand, Siavash
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Improving linear programming approaches for the Steiner tree problem :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6BB9-F
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 19 p.
%X We present two theoretically interesting and empirically successful
techniques for improving the linear programming approaches, namely
graph transformation and local cuts, in the context of the
Steiner problem. We show the impact of these techniques on the
solution of the largest benchmark instances ever solved.
%B Research Report / Max-Planck-Institut für Informatik
Random knapsack in expected polynomial time
R. Beier and B. Vöcking
Technical Report, 2003
R. Beier and B. Vöcking
Technical Report, 2003
Abstract
In this paper, we present the first average-case analysis proving an expected
polynomial running time for an exact algorithm for the 0/1 knapsack problem.
In particular, we prove, for various input distributions, that the number of
{\em dominating solutions\/} (i.e., Pareto-optimal knapsack fillings)
to this problem is polynomially bounded in the number of available items.
An algorithm by Nemhauser and Ullmann can enumerate these solutions very
efficiently so that a polynomial upper bound on the number of dominating
solutions implies an algorithm with expected polynomial running time.
The random input model underlying our analysis is very general
and not restricted to a particular input distribution. We assume adversarial
weights and randomly drawn profits (or vice versa). Our analysis covers
general probability
distributions with finite mean, and, in its most general form, can even
handle different probability distributions for the profits of different items.
This feature enables us to study the effects of correlations between profits
and weights. Our analysis confirms and explains practical studies showing
that so-called strongly correlated instances are harder to solve than
weakly correlated ones.
Export
BibTeX
@techreport{,
TITLE = {Random knapsack in expected polynomial time},
AUTHOR = {Beier, Ren{\'e} and V{\"o}cking, Berthold},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-003},
NUMBER = {MPI-I-2003-1-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {In this paper, we present the first average-case analysis proving an expected polynomial running time for an exact algorithm for the 0/1 knapsack problem. In particular, we prove, for various input distributions, that the number of {\em dominating solutions\/} (i.e., Pareto-optimal knapsack fillings) to this problem is polynomially bounded in the number of available items. An algorithm by Nemhauser and Ullmann can enumerate these solutions very efficiently so that a polynomial upper bound on the number of dominating solutions implies an algorithm with expected polynomial running time. The random input model underlying our analysis is very general and not restricted to a particular input distribution. We assume adversarial weights and randomly drawn profits (or vice versa). Our analysis covers general probability distributions with finite mean, and, in its most general form, can even handle different probability distributions for the profits of different items. This feature enables us to study the effects of correlations between profits and weights. Our analysis confirms and explains practical studies showing that so-called strongly correlated instances are harder to solve than weakly correlated ones.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Beier, René
%A Vöcking, Berthold
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Random knapsack in expected polynomial time :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6BBC-9
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-003
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 22 p.
%X In this paper, we present the first average-case analysis proving an expected
polynomial running time for an exact algorithm for the 0/1 knapsack problem.
In particular, we prove, for various input distributions, that the number of
{\em dominating solutions\/} (i.e., Pareto-optimal knapsack fillings)
to this problem is polynomially bounded in the number of available items.
An algorithm by Nemhauser and Ullmann can enumerate these solutions very
efficiently so that a polynomial upper bound on the number of dominating
solutions implies an algorithm with expected polynomial running time.
The random input model underlying our analysis is very general
and not restricted to a particular input distribution. We assume adversarial
weights and randomly drawn profits (or vice versa). Our analysis covers
general probability
distributions with finite mean, and, in its most general form, can even
handle different probability distributions for the profits of different items.
This feature enables us to study the effects of correlations between profits
and weights. Our analysis confirms and explains practical studies showing
that so-called strongly correlated instances are harder to solve than
weakly correlated ones.
%B Research Report / Max-Planck-Institut für Informatik
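The Nemhauser-Ullmann algorithm that this analysis builds on is compact enough to sketch: process the items one by one, maintain only the Pareto-optimal (weight, profit) pairs, and read off the optimum for any capacity at the end. A minimal Python sketch (the item data is made up):

def nemhauser_ullmann(items):
    """items: list of (weight, profit) pairs.
    Returns all Pareto-optimal (weight, profit) knapsack fillings."""
    pareto = [(0, 0)]  # the empty knapsack
    for w, p in items:
        shifted = [(pw + w, pp + p) for pw, pp in pareto]
        # sort by weight, ties by decreasing profit, then sweep:
        merged = sorted(pareto + shifted, key=lambda t: (t[0], -t[1]))
        pareto, best = [], -1
        for mw, mp in merged:
            if mp > best:          # strictly more profit => not dominated
                pareto.append((mw, mp))
                best = mp
    return pareto

pareto = nemhauser_ullmann([(3, 4), (2, 3), (4, 5)])
print(max(p for w, p in pareto if w <= 5))  # optimum for capacity 5 -> 7

The running time is proportional to the total size of the intermediate Pareto lists, which is exactly why a polynomial bound on the number of dominating solutions yields an exact algorithm with expected polynomial running time.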
A custom designed density estimation method for light transport
P. Bekaert, P. Slusallek, R. Cools, V. Havran and H.-P. Seidel
Technical Report, 2003
P. Bekaert, P. Slusallek, R. Cools, V. Havran and H.-P. Seidel
Technical Report, 2003
Abstract
We present a new Monte Carlo method for solving the global illumination
problem in environments with general geometry descriptions and
light emission and scattering properties. Current
Monte Carlo global illumination algorithms are based
on generic density estimation techniques that do not take into account any
knowledge about the nature of the data points --- light and potential
particle hit points --- from which a global illumination solution is to be
reconstructed. We propose a novel estimator, especially designed
for solving linear integral equations such as the rendering equation.
The resulting single-pass global illumination algorithm promises to
combine the flexibility and robustness of bi-directional
path tracing with the efficiency of algorithms such as photon mapping.
Export
BibTeX
@techreport{BekaertSlusallekCoolsHavranSeidel,
TITLE = {A custom designed density estimation method for light transport},
AUTHOR = {Bekaert, Philippe and Slusallek, Philipp and Cools, Ronald and Havran, Vlastimil and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-4-004},
NUMBER = {MPI-I-2003-4-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {We present a new Monte Carlo method for solving the global illumination problem in environments with general geometry descriptions and light emission and scattering properties. Current Monte Carlo global illumination algorithms are based on generic density estimation techniques that do not take into account any knowledge about the nature of the data points --- light and potential particle hit points --- from which a global illumination solution is to be reconstructed. We propose a novel estimator, especially designed for solving linear integral equations such as the rendering equation. The resulting single-pass global illumination algorithm promises to combine the flexibility and robustness of bi-directional path tracing with the efficiency of algorithms such as photon mapping.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Bekaert, Philippe
%A Slusallek, Philipp
%A Cools, Ronald
%A Havran, Vlastimil
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Cluster of Excellence Multimodal Computing and Interaction
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T A custom designed density estimation method for light transport :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6922-2
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-4-004
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 28 p.
%X We present a new Monte Carlo method for solving the global illumination
problem in environments with general geometry descriptions and
light emission and scattering properties. Current
Monte Carlo global illumination algorithms are based
on generic density estimation techniques that do not take into account any
knowledge about the nature of the data points --- light and potential
particle hit points --- from which a global illumination solution is to be
reconstructed. We propose a novel estimator, especially designed
for solving linear integral equations such as the rendering equation.
The resulting single-pass global illumination algorithm promises to
combine the flexibility and robustness of bi-directional
path tracing with the efficiency of algorithms such as photon mapping.
%B Research Report / Max-Planck-Institut für Informatik
Girth and treewidth
S. Chandran Leela and C. R. Subramanian
Technical Report, 2003
S. Chandran Leela and C. R. Subramanian
Technical Report, 2003
Export
BibTeX
@techreport{,
TITLE = {Girth and treewidth},
AUTHOR = {Chandran Leela, Sunil and Subramanian, C. R.},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-NWG2-001},
NUMBER = {MPI-I-2003-NWG2-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Chandran Leela, Sunil
%A Subramanian, C. R.
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Girth and treewidth :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6868-0
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-NWG2-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 11 p.
%B Research Report / Max-Planck-Institut für Informatik
On the Bollobás–Eldridge conjecture for bipartite graphs
B. Csaba
Technical Report, 2003
B. Csaba
Technical Report, 2003
Abstract
Let $G$ be a simple graph on $n$ vertices. A conjecture of
Bollob\'as and Eldridge~\cite{be78} asserts that if
$\delta (G)\ge {kn-1 \over k+1}$ then $G$ contains any $n$ vertex
graph $H$ with $\Delta(H) = k$. We strengthen this conjecture: we
prove that if $H$ is bipartite, $3 \le \Delta(H)$ is bounded and $n$
is sufficiently large, then there exists $\beta >0$ such that if
$\delta (G)\ge {\Delta \over {\Delta +1}}(1-\beta)n$, then $H \subset G$.
Export
BibTeX
@techreport{Csaba2003,
TITLE = {On the Bollob{\'a}s--Eldridge conjecture for bipartite graphs},
AUTHOR = {Csaba, Bela},
LANGUAGE = {eng},
NUMBER = {MPI-I-2003-1-009},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {Let $G$ be a simple graph on $n$ vertices. A conjecture of Bollob\'as and Eldridge~\cite{be78} asserts that if $\delta (G)\ge {kn-1 \over k+1}$ then $G$ contains any $n$ vertex graph $H$ with $\Delta(H) = k$. We strengthen this conjecture: we prove that if $H$ is bipartite, $3 \le \Delta(H)$ is bounded and $n$ is sufficiently large, then there exists $\beta >0$ such that if $\delta (G)\ge {\Delta \over {\Delta +1}}(1-\beta)n$, then $H \subset G$.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Csaba, Bela
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On the Bollobás–Eldridge conjecture for bipartite graphs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6B3A-F
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 29 p.
%X Let $G$ be a simple graph on $n$ vertices. A conjecture of
Bollob\'as and Eldridge~\cite{be78} asserts that if
$\delta (G)\ge {kn-1 \over k+1}$ then $G$ contains any $n$ vertex
graph $H$ with $\Delta(H) = k$. We strengthen this conjecture: we
prove that if $H$ is bipartite, $3 \le \Delta(H)$ is bounded and $n$
is sufficiently large, then there exists $\beta >0$ such that if
$\delta (G)\ge {\Delta \over {\Delta +1}}(1-\beta)n$, then $H \subset G$.
%B Research Report / Max-Planck-Institut für Informatik
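For a concrete reading of the bound (a worked instance, not taken from the report): for $\Delta(H) = 3$ the Bollob\'as--Eldridge threshold reads $\delta(G) \ge {3n-1 \over 4}$, while the strengthened bipartite statement only requires $\delta(G) \ge {3 \over 4}(1-\beta)n$ for some fixed $\beta > 0$ and $n$ large, i.e., a minimum degree a constant fraction below the conjectured threshold already forces $H \subset G$.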
On the probability of rendezvous in graphs
M. Dietzfelbinger and H. Tamaki
Technical Report, 2003
M. Dietzfelbinger and H. Tamaki
Technical Report, 2003
Abstract
In a simple graph $G$ without isolated nodes the
following random experiment is carried out:
each node chooses one
of its neighbors uniformly at random.
We say a rendezvous occurs
if there are adjacent nodes $u$ and $v$
such that $u$ chooses $v$
and $v$ chooses $u$;
the probability that this happens is denoted by $s(G)$.
M{\'e}tivier \emph{et al.} (2000) asked
whether it is true
that $s(G)\ge s(K_n)$
for all $n$-node graphs $G$,
where $K_n$ is the complete graph on $n$ nodes.
We show that this is the case.
Moreover, we show that evaluating $s(G)$
for a given graph $G$ is a \#P-complete problem,
even if only $d$-regular graphs are considered,
for any $d\ge5$.
Export
BibTeX
@techreport{MPI-I-94-224,
TITLE = {On the probability of rendezvous in graphs},
AUTHOR = {Dietzfelbinger, Martin and Tamaki, Hisao},
LANGUAGE = {eng},
NUMBER = {MPI-I-2003-1-006},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {In a simple graph $G$ without isolated nodes the following random experiment is carried out: each node chooses one of its neighbors uniformly at random. We say a rendezvous occurs if there are adjacent nodes $u$ and $v$ such that $u$ chooses $v$ and $v$ chooses $u$; the probability that this happens is denoted by $s(G)$. M{\'e}tivier \emph{et al.} (2000) asked whether it is true that $s(G)\ge s(K_n)$ for all $n$-node graphs $G$, where $K_n$ is the complete graph on $n$ nodes. We show that this is the case. Moreover, we show that evaluating $s(G)$ for a given graph $G$ is a \#P-complete problem, even if only $d$-regular graphs are considered, for any $d\ge5$.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Dietzfelbinger, Martin
%A Tamaki, Hisao
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On the probability of rendezvous in graphs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6B83-7
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 30 p.
%X In a simple graph $G$ without isolated nodes the
following random experiment is carried out:
each node chooses one
of its neighbors uniformly at random.
We say a rendezvous occurs
if there are adjacent nodes $u$ and $v$
such that $u$ chooses $v$
and $v$ chooses $u$;
the probability that this happens is denoted by $s(G)$.
M{\'e}tivier \emph{et al.} (2000) asked
whether it is true
that $s(G)\ge s(K_n)$
for all $n$-node graphs $G$,
where $K_n$ is the complete graph on $n$ nodes.
We show that this is the case.
Moreover, we show that evaluating $s(G)$
for a given graph $G$ is a \#P-complete problem,
even if only $d$-regular graphs are considered,
for any $d\ge5$.
%B Research Report / Max-Planck-Institut für Informatik
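The random experiment is straightforward to simulate, which gives a quick sanity check on $s(G)$ for small graphs. A Monte Carlo sketch (adjacency as a dict of neighbor lists; the trial count is arbitrary):

import random

def estimate_rendezvous(adj, trials=100_000):
    """Estimate s(G): every node picks a uniform random neighbor;
    count the runs in which some adjacent pair picks each other."""
    hits = 0
    for _ in range(trials):
        choice = {u: random.choice(adj[u]) for u in adj}
        if any(choice[choice[u]] == u for u in adj):
            hits += 1
    return hits / trials

# K_3: enumerating the 2^3 outcomes by hand gives s(K_3) = 6/8 = 3/4.
k3 = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(estimate_rendezvous(k3))  # approx 0.75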
Almost random graphs with simple hash functions
M. Dietzfelbinger and P. Woelfel
Technical Report, 2003
M. Dietzfelbinger and P. Woelfel
Technical Report, 2003
Abstract
We describe a simple randomized construction for generating pairs of
hash functions h_1,h_2 from a universe U to ranges V=[m]={0,1,...,m-1}
and W=[m] so that for every key set S\subseteq U with
n=|S| <= m/(1+epsilon) the (random) bipartite (multi)graph with node
set V + W and edge set {(h_1(x),h_2(x)) | x in S} exhibits a structure
that is essentially random. The construction combines d-wise
independent classes for d a relatively small constant with the
well-known technique of random offsets. While keeping the space
needed to store the description of h_1 and h_2 at O(n^zeta), for
zeta<1 fixed arbitrarily, we obtain a much smaller (constant)
evaluation time than previous constructions of this kind, which
involved Siegel's high-performance hash classes. The main new
technique is the combined analysis of the graph structure and the
inner structure of the hash functions, as well as a new way of looking
at the cycle structure of random (multi)graphs. The construction may
be applied to improve on Pagh and Rodler's ``cuckoo hashing'' (2001),
to obtain a simpler and faster alternative to a recent construction of
"Ostlin and Pagh (2002/03) for simulating uniform hashing on a key set
S, and to the simulation of shared memory on distributed memory
machines. We also describe a novel way of implementing (approximate)
d-wise independent hashing without using polynomials.
Export
BibTeX
@techreport{,
TITLE = {Almost random graphs with simple hash functions},
AUTHOR = {Dietzfelbinger, Martin and Woelfel, Philipp},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-005},
NUMBER = {MPI-I-2003-1-005},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {We describe a simple randomized construction for generating pairs of hash functions h_1,h_2 from a universe U to ranges V=[m]={0,1,...,m-1} and W=[m] so that for every key set S\subseteq U with n=|S| <= m/(1+epsilon) the (random) bipartite (multi)graph with node set V + W and edge set {(h_1(x),h_2(x)) | x in S} exhibits a structure that is essentially random. The construction combines d-wise independent classes for d a relatively small constant with the well-known technique of random offsets. While keeping the space needed to store the description of h_1 and h_2 at O(n^zeta), for zeta<1 fixed arbitrarily, we obtain a much smaller (constant) evaluation time than previous constructions of this kind, which involved Siegel's high-performance hash classes. The main new technique is the combined analysis of the graph structure and the inner structure of the hash functions, as well as a new way of looking at the cycle structure of random (multi)graphs. The construction may be applied to improve on Pagh and Rodler's ``cuckoo hashing'' (2001), to obtain a simpler and faster alternative to a recent construction of {\"O}stlin and Pagh (2002/03) for simulating uniform hashing on a key set S, and to the simulation of shared memory on distributed memory machines. We also describe a novel way of implementing (approximate) d-wise independent hashing without using polynomials.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Dietzfelbinger, Martin
%A Woelfel, Philipp
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T Almost random graphs with simple hash functions :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6BB3-C
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-005
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 23 p.
%X We describe a simple randomized construction for generating pairs of
hash functions h_1,h_2 from a universe U to ranges V=[m]={0,1,...,m-1}
and W=[m] so that for every key set S\subseteq U with
n=|S| <= m/(1+epsilon) the (random) bipartite (multi)graph with node
set V + W and edge set {(h_1(x),h_2(x)) | x in S} exhibits a structure
that is essentially random. The construction combines d-wise
independent classes for d a relatively small constant with the
well-known technique of random offsets. While keeping the space
needed to store the description of h_1 and h_2 at O(n^zeta), for
zeta<1 fixed arbitrarily, we obtain a much smaller (constant)
evaluation time than previous constructions of this kind, which
involved Siegel's high-performance hash classes. The main new
technique is the combined analysis of the graph structure and the
inner structure of the hash functions, as well as a new way of looking
at the cycle structure of random (multi)graphs. The construction may
be applied to improve on Pagh and Rodler's ``cuckoo hashing'' (2001),
to obtain a simpler and faster alternative to a recent construction of
"Ostlin and Pagh (2002/03) for simulating uniform hashing on a key set
S, and to the simulation of shared memory on distributed memory
machines. We also describe a novel way of implementing (approximate)
d-wise independent hashing without using polynomials.
%B Research Report / Max-Planck-Institut für Informatik
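For readers who have not seen the cuckoo hashing scheme of Pagh and Rodler that the abstract refers to: keys live in one of two tables, each with its own hash function, and an insertion that finds its slot occupied evicts the resident key and re-inserts it on the other side. A toy sketch (the modular hash functions are placeholders, much weaker than the classes analyzed in the report; a real implementation rebuilds with fresh functions when insert fails):

class Cuckoo:
    def __init__(self, m=11):
        self.m = m
        self.t1 = [None] * m
        self.t2 = [None] * m

    def _h1(self, x):
        return x % self.m             # placeholder hash functions
    def _h2(self, x):
        return (x // self.m) % self.m

    def insert(self, x, max_kicks=32):
        for _ in range(max_kicks):
            i = self._h1(x)
            x, self.t1[i] = self.t1[i], x   # place x, evict old resident
            if x is None:
                return True
            j = self._h2(x)
            x, self.t2[j] = self.t2[j], x   # retry evicted key in table 2
            if x is None:
                return True
        return False   # likely a cycle: rebuild with new hash functions

    def contains(self, x):
        return self.t1[self._h1(x)] == x or self.t2[self._h2(x)] == x

c = Cuckoo()
for key in (5, 16, 27, 3, 42):
    assert c.insert(key)
assert c.contains(27) and not c.contains(99)

Lookups probe exactly two slots; the report's concern is orthogonal to this sketch: keeping the hash functions themselves both small to store and constant-time to evaluate.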
Specification of the Traits Classes for CGAL Arrangements of Curves
E. Fogel, D. Halperin, R. Wein, M. Teillaud, E. Berberich, A. Eigenwillig, S. Hert and L. Kettner
Technical Report, 2003
E. Fogel, D. Halperin, R. Wein, M. Teillaud, E. Berberich, A. Eigenwillig, S. Hert and L. Kettner
Technical Report, 2003
Export
BibTeX
@techreport{ecg:fhw-stcca-03,
TITLE = {Specification of the Traits Classes for {CGAL} Arrangements of Curves},
AUTHOR = {Fogel, Efi and Halperin, Dan and Wein, Ron and Teillaud, Monique and Berberich, Eric and Eigenwillig, Arno and Hert, Susan and Kettner, Lutz},
LANGUAGE = {eng},
NUMBER = {ECG-TR-241200-01},
INSTITUTION = {INRIA},
ADDRESS = {Sophia-Antipolis},
YEAR = {2003},
DATE = {2003},
TYPE = {Technical Report},
}
Endnote
%0 Report
%A Fogel, Efi
%A Halperin, Dan
%A Wein, Ron
%A Teillaud, Monique
%A Berberich, Eric
%A Eigenwillig, Arno
%A Hert, Susan
%A Kettner, Lutz
%+ External Organizations
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Specification of the Traits Classes for CGAL Arrangements of Curves :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0027-B4C6-5
%Y INRIA
%C Sophia-Antipolis
%D 2003
%B Technical Report
The dimension of $C^1$ splines of arbitrary degree on a tetrahedral partition
T. Hangelbroek, G. Nürnberger, C. Rössl, H.-P. Seidel and F. Zeilfelder
Technical Report, 2003
T. Hangelbroek, G. Nürnberger, C. Rössl, H.-P. Seidel and F. Zeilfelder
Technical Report, 2003
Abstract
We consider the linear space of piecewise polynomials in three variables
which are globally smooth, i.e., trivariate $C^1$ splines. The splines are
defined on a uniform tetrahedral partition $\Delta$, which is a natural
generalization of the four-directional mesh. By using Bernstein-B{\'e}zier
techniques, we establish formulae for the dimension of the $C^1$ splines
of arbitrary degree.
Export
BibTeX
@techreport{HangelbroekNurnbergerRoesslSeidelZeilfelder2003,
TITLE = {The dimension of $C^1$ splines of arbitrary degree on a tetrahedral partition},
AUTHOR = {Hangelbroek, Thomas and N{\"u}rnberger, G{\"u}nther and R{\"o}ssl, Christian and Seidel, Hans-Peter and Zeilfelder, Frank},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-4-005},
NUMBER = {MPI-I-2003-4-005},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {We consider the linear space of piecewise polynomials in three variables which are globally smooth, i.e., trivariate $C^1$ splines. The splines are defined on a uniform tetrahedral partition $\Delta$, which is a natural generalization of the four-directional mesh. By using Bernstein-B{\'e}zier techniques, we establish formulae for the dimension of the $C^1$ splines of arbitrary degree.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Hangelbroek, Thomas
%A Nürnberger, Günther
%A Rössl, Christian
%A Seidel, Hans-Peter
%A Zeilfelder, Frank
%+ Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T The dimension of $C^1$ splines of arbitrary degree on a tetrahedral partition :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6887-A
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-4-005
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 39 p.
%X We consider the linear space of piecewise polynomials in three variables
which are globally smooth, i.e., trivariate $C^1$ splines. The splines are
defined on a uniform tetrahedral partition $\Delta$, which is a natural
generalization of the four-directional mesh. By using Bernstein-B{\'e}zier
techniques, we establish formulae for the dimension of the $C^1$ splines
of arbitrary degree.
%B Research Report / Max-Planck-Institut für Informatik
Fast bound consistency for the global cardinality constraint
I. Katriel and S. Thiel
Technical Report, 2003
I. Katriel and S. Thiel
Technical Report, 2003
Abstract
We show an algorithm for bound consistency of {\em global cardinality
constraints}, which runs in time $O(n+n')$ plus the time required to sort
the assignment variables by range endpoints, where $n$ is the number of
assignment variables and $n'$ is the number of values in the union of
their ranges. We thus offer a fast alternative to R\'egin's
arc consistency algorithm~\cite{Regin} which runs
in time $O(n^{3/2}n')$ and space $O(n\cdot n')$. Our algorithm
also achieves bound consistency for the number of occurrences
of each value, which has not been done before.
Export
BibTeX
@techreport{,
TITLE = {Fast bound consistency for the global cardinality constraint},
AUTHOR = {Katriel, Irit and Thiel, Sven},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-013},
NUMBER = {MPI-I-2003-1-013},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {We show an algorithm for bound consistency of {\em global cardinality constraints}, which runs in time $O(n+n')$ plus the time required to sort the assignment variables by range endpoints, where $n$ is the number of assignment variables and $n'$ is the number of values in the union of their ranges. We thus offer a fast alternative to R\'egin's arc consistency algorithm~\cite{Regin} which runs in time $O(n^{3/2}n')$ and space $O(n\cdot n')$. Our algorithm also achieves bound consistency for the number of occurrences of each value, which has not been done before.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Katriel, Irit
%A Thiel, Sven
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Fast bound consistency for the global cardinality constraint :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6B1F-D
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-013
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 30 p.
%X We show an algorithm for bound consistency of {\em global cardinality
constraints}, which runs in time $O(n+n')$ plus the time required to sort
the assignment variables by range endpoints, where $n$ is the number of
assignment variables and $n'$ is the number of values in the union of
their ranges. We thus offer a fast alternative to R\'egin's
arc consistency algorithm~\cite{Regin} which runs
in time $O(n^{3/2}n')$ and space $O(n\cdot n')$. Our algorithm
also achieves bound consistency for the number of occurrences
of each value, which has not been done before.
%B Research Report / Max-Planck-Institut für Informatik
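For readers unfamiliar with the constraint itself: a global cardinality constraint bounds, for each value, how many of the assignment variables may take it. The report's $O(n+n')$ propagator is intricate; the sketch below is only a brute-force feasibility check that pins down the semantics on toy instances (all names and the example data are made up):

from itertools import product

def gcc_feasible(ranges, occ):
    """ranges: {var: (lo, hi)} interval domains.
    occ: {value: (min_occ, max_occ)} occurrence bounds.
    Returns True iff some assignment meets every occurrence bound."""
    domains = [range(lo, hi + 1) for lo, hi in ranges.values()]
    for assignment in product(*domains):
        counts = {v: assignment.count(v) for v in occ}
        if all(lo <= counts[v] <= hi for v, (lo, hi) in occ.items()):
            return True
    return False

# x, y, z range over {1, 2}; value 1 must occur exactly twice.
print(gcc_feasible({'x': (1, 2), 'y': (1, 2), 'z': (1, 2)},
                   {1: (2, 2), 2: (0, 1)}))   # True, e.g. x = y = 1, z = 2

Bound consistency then means shrinking each variable's interval to the endpoints that still appear in some feasible assignment, which the report achieves in time $O(n+n')$ plus sorting rather than by enumeration.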
Sum-Multicoloring on paths
A. Kovács
Technical Report, 2003
A. Kovács
Technical Report, 2003
Abstract
The question whether the preemptive Sum Multicoloring (pSMC)
problem is hard on paths was raised by Halldorsson
et al. ["Multi-coloring trees", Information and Computation,
180(2):113-129, 2002]. The pSMC problem is a scheduling problem where the
pairwise conflicting jobs are represented by a conflict graph, and the
time lengths of jobs by integer weights on the nodes. The goal is to
schedule the jobs so that the sum of their finishing times is
minimized. In the paper we give an O(n^3p) time algorithm
for the pSMC problem on paths, where n is the number of nodes and p is
the largest time length. The result easily carries over to cycles.
Export
BibTeX
@techreport{,
TITLE = {Sum-Multicoloring on paths},
AUTHOR = {Kov{\'a}cs, Annamaria},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-015},
NUMBER = {MPI-I-2003-1-015},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {The question whether the preemptive Sum Multicoloring (pSMC) problem is hard on paths was raised by Halldorsson et al. ["Multi-coloring trees", Information and Computation, 180(2):113-129, 2002]. The pSMC problem is a scheduling problem where the pairwise conflicting jobs are represented by a conflict graph, and the time lengths of jobs by integer weights on the nodes. The goal is to schedule the jobs so that the sum of their finishing times is minimized. In the paper we give an O(n^3p) time algorithm for the pSMC problem on paths, where n is the number of nodes and p is the largest time length. The result easily carries over to cycles.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Kovács, Annamaria
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Sum-Multicoloring on paths :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6B18-C
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-015
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 20 p.
%X The question whether the preemptive Sum Multicoloring (pSMC)
problem is hard on paths was raised by Halldorsson
et al. ["Multi-coloring trees", Information and Computation,
180(2):113-129, 2002]. The pSMC problem is a scheduling problem where the
pairwise conflicting jobs are represented by a conflict graph, and the
time lengths of jobs by integer weights on the nodes. The goal is to
schedule the jobs so that the sum of their finishing times is
minimized. In the paper we give an O(n^3p) time algorithm
for the pSMC problem on paths, where n is the number of nodes and p is
the largest time length. The result easily carries over to cycles.
%B Research Report / Max-Planck-Institut für Informatik
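To make the objective concrete, the sketch below solves tiny instances by exhaustive search over color sets. It is emphatically not the paper's O(n^3 p) dynamic program, only a checker for the problem statement; preemption means a node's time slots need not be contiguous:

from itertools import combinations, product

def psmc_opt(path_lengths):
    """Exhaustive preemptive sum multicoloring on a path.
    path_lengths[i] = time demand p_i of node i; nodes i and i+1 conflict.
    Each node gets a set of p_i colors (time slots); adjacent sets must be
    disjoint; minimize the sum over nodes of the largest used slot."""
    n = len(path_lengths)
    horizon = sum(path_lengths)            # always enough slots
    slots = range(1, horizon + 1)
    best = None
    for sets in product(*(combinations(slots, p) for p in path_lengths)):
        if all(set(sets[i]).isdisjoint(sets[i + 1]) for i in range(n - 1)):
            cost = sum(max(s) for s in sets)
            best = cost if best is None else min(best, cost)
    return best

print(psmc_opt([2, 1, 2]))  # -> 7, e.g. ends get {1, 2} and the middle {3}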
Selfish traffic allocation for server farms
P. Krysta, A. Czumaj and B. Vöcking
Technical Report, 2003
P. Krysta, A. Czumaj and B. Vöcking
Technical Report, 2003
Abstract
We study the price of selfish routing in non-cooperative
networks like the Internet. In particular, we investigate the
price of selfish routing using the coordination ratio and
other (e.g., bicriteria) measures in the recently introduced game
theoretic network model of Koutsoupias and Papadimitriou. We generalize
this model towards general, monotone families of cost functions and
cost functions from queueing theory. A summary of our main results
for general, monotone cost functions is as follows.
Export
BibTeX
@techreport{,
TITLE = {Selfish traffic allocation for server farms},
AUTHOR = {Krysta, Piotr and Czumaj, Artur and V{\"o}cking, Berthold},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-011},
NUMBER = {MPI-I-2003-1-011},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {We study the price of selfish routing in non-cooperative networks like the Internet. In particular, we investigate the price of selfish routing using the coordination ratio and other (e.g., bicriteria) measures in the recently introduced game theoretic network model of Koutsoupias and Papadimitriou. We generalize this model towards general, monotone families of cost functions and cost functions from queueing theory. A summary of our main results for general, monotone cost functions is as follows.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Krysta, Piotr
%A Czumaj, Artur
%A Vöcking, Berthold
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Selfish traffic allocation for server farms :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6B33-E
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-011
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 43 p.
%X We study the price of selfish routing in non-cooperative
networks like the Internet. In particular, we investigate the
price of selfish routing using the coordination ratio and
other (e.g., bicriteria) measures in the recently introduced game
theoretic network model of Koutsoupias and Papadimitriou. We generalize
this model towards general, monotone families of cost functions and
cost functions from queueing theory. A summary of our main results
for general, monotone cost functions is as follows.
%B Research Report / Max-Planck-Institut für Informatik
Scheduling and traffic allocation for tasks with bounded splittability
P. Krysta, P. Sanders and B. Vöcking
Technical Report, 2003
P. Krysta, P. Sanders and B. Vöcking
Technical Report, 2003
Abstract
We investigate variants of the well-studied problem of scheduling
tasks on uniformly related machines to minimize the makespan.
In the $k$-splittable scheduling problem each task can be broken into
at most $k \ge 2$ pieces each of which has to be assigned to a different
machine. In the slightly more general SAC problem each task $j$ comes with
its own splittability parameter $k_j$, where we assume $k_j \ge 2$.
These problems are known to be NP-hard and, hence, previous
research mainly focuses on approximation algorithms.
Our motivation to study these scheduling problems is traffic allocation
for server farms based on a variant of the Internet Domain Name Service
(DNS) that uses a stochastic splitting of request streams. Optimal
solutions for the $k$-splittable scheduling problem yield optimal
solutions for this traffic allocation problem. Approximation ratios,
however, do not translate from one problem to the other because of
non-linear latency functions. In fact, we can prove that the traffic
allocation problem with standard latency functions from Queueing Theory
cannot be approximated in polynomial time within any finite factor
because of the extreme behavior of these functions.
Because of the inapproximability, we turn our attention to fixed-parameter
tractable algorithms. Our main result is a polynomial time algorithm
computing an exact solution for the $k$-splittable scheduling problem as
well as the SAC problem for any fixed number of machines.
The running time of our algorithm increases exponentially with the
number of machines but is only linear in the number of tasks.
This result is the first proof that bounded splittability reduces
the complexity of scheduling as the unsplittable scheduling is known
to be NP-hard already for two machines. Furthermore, since our
algorithm solves the scheduling problem exactly, it also solves the
traffic allocation problem that motivated our study.
Export
BibTeX
@techreport{MPI-I-2003-1-002,
TITLE = {Scheduling and traffic allocation for tasks with bounded splittability},
AUTHOR = {Krysta, Piotr and Sanders, Peter and V{\"o}cking, Berthold},
LANGUAGE = {eng},
NUMBER = {MPI-I-2003-1-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {We investigate variants of the well-studied problem of scheduling tasks on uniformly related machines to minimize the makespan. In the $k$-splittable scheduling problem each task can be broken into at most $k \ge 2$ pieces each of which has to be assigned to a different machine. In the slightly more general SAC problem each task $j$ comes with its own splittability parameter $k_j$, where we assume $k_j \ge 2$. These problems are known to be NP-hard and, hence, previous research mainly focuses on approximation algorithms. Our motivation to study these scheduling problems is traffic allocation for server farms based on a variant of the Internet Domain Name Service (DNS) that uses a stochastic splitting of request streams. Optimal solutions for the $k$-splittable scheduling problem yield optimal solutions for this traffic allocation problem. Approximation ratios, however, do not translate from one problem to the other because of non-linear latency functions. In fact, we can prove that the traffic allocation problem with standard latency functions from Queueing Theory cannot be approximated in polynomial time within any finite factor because of the extreme behavior of these functions. Because of the inapproximability, we turn our attention to fixed-parameter tractable algorithms. Our main result is a polynomial time algorithm computing an exact solution for the $k$-splittable scheduling problem as well as the SAC problem for any fixed number of machines. The running time of our algorithm increases exponentially with the number of machines but is only linear in the number of tasks. This result is the first proof that bounded splittability reduces the complexity of scheduling as the unsplittable scheduling is known to be NP-hard already for two machines. Furthermore, since our algorithm solves the scheduling problem exactly, it also solves the traffic allocation problem that motivated our study.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Krysta, Piotr
%A Sanders, Peter
%A Vöcking, Berthold
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Scheduling and traffic allocation for tasks with bounded splittability :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6BD1-8
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 15 p.
%X We investigate variants of the well studied problem of scheduling
tasks on uniformly related machines to minimize the makespan.
In the $k$-splittable scheduling problem each task can be broken into
at most $k \ge 2$ pieces each of which has to be assigned to a different
machine. In the slightly more general SAC problem each task $j$ comes with
its own splittability parameter $k_j$, where we assume $k_j \ge 2$.
These problems are known to be NP-hard and, hence, previous
research mainly focuses on approximation algorithms.
Our motivation to study these scheduling problems is traffic allocation
for server farms based on a variant of the Internet Domain Name Service
(DNS) that uses a stochastic splitting of request streams. Optimal
solutions for the $k$-splittable scheduling problem yield optimal
solutions for this traffic allocation problem. Approximation ratios,
however, do not translate from one problem to the other because of
non-linear latency functions. In fact, we can prove that the traffic
allocation problem with standard latency functions from Queueing Theory
cannot be approximated in polynomial time within any finite factor
because of the extreme behavior of these functions.
Because of the inapproximability, we turn our attention to fixed-parameter
tractable algorithms. Our main result is a polynomial time algorithm
computing an exact solution for the $k$-splittable scheduling problem as
well as the SAC problem for any fixed number of machines.
The running time of our algorithm increases exponentially with the
number of machines but is only linear in the number of tasks.
This result is the first proof that bounded splittability reduces
the complexity of scheduling, since unsplittable scheduling is known
to be NP-hard already for two machines. Furthermore, since our
algorithm solves the scheduling problem exactly, it also solves the
traffic allocation problem that motivated our study.
%B Research Report / Max-Planck-Institut für Informatik
Visualization of volume data with quadratic super splines
C. Rössl, F. Zeilfelder, G. Nürnberger and H.-P. Seidel
Technical Report, 2003
C. Rössl, F. Zeilfelder, G. Nürnberger and H.-P. Seidel
Technical Report, 2003
Abstract
We develop a new approach to reconstruct non-discrete models from gridded
volume samples. As a model, we use quadratic, trivariate super splines on
a uniform tetrahedral partition $\Delta$. The approximating splines are
determined in a natural and completely symmetric way by averaging local
data samples such that appropriate smoothness conditions are automatically
satisfied. On each tetrahedron of $\Delta$, the spline is a polynomial of
total degree two which provides several advantages including the efficient
computation, evaluation and visualization of the model. We apply
Bernstein-Bézier techniques well known in Computer Aided Geometric
Design to compute and evaluate the trivariate spline and its gradient.
With this approach the volume data can be visualized efficiently, e.g. with
isosurface ray-casting. Along an arbitrary ray the splines are univariate,
piecewise quadratics and thus the exact intersection for a prescribed
isovalue can be easily determined in an analytic and exact way. Our
results confirm the efficiency of the method and demonstrate a high visual
quality for rendered isosurfaces.
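Since the spline restricted to a ray is a univariate piecewise quadratic, each isosurface hit is a closed-form root. The following Python sketch assumes a hypothetical encoding of each piece as (t0, t1, a, b, c) with $s(t) = at^2 + bt + c$ on $[t_0, t_1]$; it illustrates the analytic intersection step only and is not the report's data structure.
import math

def quad_iso_intersections(pieces, iso):
    # Exact parameters t with s(t) == iso, one quadratic solve per piece.
    hits = []
    for t0, t1, a, b, c in pieces:
        if abs(a) < 1e-12:                    # (near) linear piece
            if abs(b) > 1e-12:
                t = (iso - c) / b
                if t0 <= t <= t1:
                    hits.append(t)
            continue
        disc = b * b - 4.0 * a * (c - iso)
        if disc < 0.0:
            continue                          # no real intersection on this piece
        r = math.sqrt(disc)
        for t in ((-b - r) / (2 * a), (-b + r) / (2 * a)):
            if t0 <= t <= t1:
                hits.append(t)
    return sorted(hits)

# One piece s(t) = t^2 on [0, 1], isovalue 0.25: hit at t = 0.5.
print(quad_iso_intersections([(0.0, 1.0, 1.0, 0.0, 0.0)], 0.25))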
Export
BibTeX
@techreport{RoesslZeilfelderNurnbergerSeidel2003,
TITLE = {Visualization of volume data with quadratic super splines},
AUTHOR = {R{\"o}ssl, Christian and Zeilfelder, Frank and N{\"u}rnberger, G{\"u}nther and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2004-4-006},
NUMBER = {MPI-I-2004-4-006},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {We develop a new approach to reconstruct non-discrete models from gridded volume samples. As a model, we use quadratic, trivariate super splines on a uniform tetrahedral partition $\Delta$. The approximating splines are determined in a natural and completely symmetric way by averaging local data samples such that appropriate smoothness conditions are automatically satisfied. On each tetrahedron of $\Delta$, the spline is a polynomial of total degree two which provides several advantages including the efficient computation, evaluation and visualization of the model. We apply Bernstein-B{\'e}zier techniques well known in Computer Aided Geometric Design to compute and evaluate the trivariate spline and its gradient. With this approach the volume data can be visualized efficiently, e.g. with isosurface ray-casting. Along an arbitrary ray the splines are univariate, piecewise quadratics and thus the exact intersection for a prescribed isovalue can be easily determined in an analytic and exact way. Our results confirm the efficiency of the method and demonstrate a high visual quality for rendered isosurfaces.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Rössl, Christian
%A Zeilfelder, Frank
%A Nürnberger, Günther
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
%T Visualization of volume data with quadratic super splines :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6AE8-D
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2004-4-006
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 15 p.
%X We develop a new approach to reconstruct non-discrete models from gridded
volume samples. As a model, we use quadratic, trivariate super splines on
a uniform tetrahedral partition $\Delta$. The approximating splines are
determined in a natural and completely symmetric way by averaging local
data samples such that appropriate smoothness conditions are automatically
satisfied. On each tetrahedron of $\Delta$, the spline is a polynomial of
total degree two which provides several advantages including the efficient
computation, evaluation and visualization of the model. We apply
Bernstein-Bézier techniques well known in Computer Aided Geometric
Design to compute and evaluate the trivariate spline and its gradient.
With this approach the volume data can be visualized efficiently, e.g. with
isosurface ray-casting. Along an arbitrary ray the splines are univariate,
piecewise quadratics and thus the exact intersection for a prescribed
isovalue can be easily determined in an analytic and exact way. Our
results confirm the efficiency of the method and demonstrate a high visual
quality for rendered isosurfaces.
%B Research Report
Asynchronous parallel disk sorting
P. Sanders and R. Dementiev
Technical Report, 2003
P. Sanders and R. Dementiev
Technical Report, 2003
Abstract
We develop an algorithm for parallel disk sorting, whose I/O cost
approaches the lower bound and that guarantees almost perfect
overlap between I/O and computation. Previous algorithms have
either suboptimal I/O volume or cannot guarantee that I/O and
computations can always be overlapped. We give an efficient
implementation that can (at least) compete with the best practical
implementations but gives additional performance guarantees.
For the experiments we have configured a state of the art machine
that can sustain full bandwidth I/O with eight disks and is very cost
effective.
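The overlap guarantee can be pictured as double buffering: while one block is being sorted, the next is already being fetched. The toy Python sketch below uses a bounded queue and a reader thread as stand-ins for asynchronous disk I/O; it illustrates the overlap idea only and is not the report's implementation.
import threading, queue

def reader(blocks, buf):
    for block in blocks:          # stands in for an asynchronous disk read
        buf.put(block)
    buf.put(None)                 # end-of-stream marker

def sort_runs_with_overlap(blocks):
    buf = queue.Queue(maxsize=2)  # double buffering: at most two blocks ahead
    threading.Thread(target=reader, args=(blocks, buf), daemon=True).start()
    runs = []
    while (block := buf.get()) is not None:
        runs.append(sorted(block))   # computation overlapped with the reads
    return runs

print(sort_runs_with_overlap([[3, 1, 2], [9, 7, 8]]))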
Export
BibTeX
@techreport{MPI-I-2003-1-001,
TITLE = {Asynchronous parallel disk sorting},
AUTHOR = {Sanders, Peter and Dementiev, Roman},
LANGUAGE = {eng},
NUMBER = {MPI-I-2003-1-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {We develop an algorithm for parallel disk sorting, whose I/O cost approaches the lower bound and that guarantees almost perfect overlap between I/O and computation. Previous algorithms have either suboptimal I/O volume or cannot guarantee that I/O and computations can always be overlapped. We give an efficient implementation that can (at least) compete with the best practical implementations but gives additional performance guarantees. For the experiments we have configured a state of the art machine that can sustain full bandwidth I/O with eight disks and is very cost effective.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Sanders, Peter
%A Dementiev, Roman
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Asynchronous parallel disk sorting :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6C80-5
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 22 p.
%X We develop an algorithm for parallel disk sorting, whose I/O cost
approaches the lower bound and that guarantees almost perfect
overlap between I/O and computation. Previous algorithms have
either suboptimal I/O volume or cannot guarantee that I/O and
computations can always be overlapped. We give an efficient
implementation that can (at least) compete with the best practical
implementations but gives additional performance guarantees.
For the experiments we have configured a state of the art machine
that can sustain full bandwidth I/O with eight disks and is very cost
effective.
%B Research Report / Max-Planck-Institut für Informatik
Polynomial time algorithms for network information flow
P. Sanders
Technical Report, 2003
P. Sanders
Technical Report, 2003
Abstract
The famous max-flow min-cut theorem states that a source node $s$ can
send information through a network $(V,E)$ to a sink node $t$ at a
rate determined by the min-cut separating $s$ and $t$. Recently it
has been shown that this rate can also be achieved for multicasting to
several sinks provided that the intermediate nodes are allowed to
reencode the information they receive. We give
polynomial time algorithms for solving this problem. We additionally
underline the potential benefit of coding by showing that multicasting
without coding sometimes only allows a rate that is a factor
$\Omega(\log |V|)$ smaller.
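For a single sink, the achievable rate is just the $s$-$t$ maximum flow, so any augmenting-path method computes it; the multicast rate is the minimum of these values over all sinks. Below is a plain Edmonds-Karp sketch in Python on an illustrative capacity matrix; the algorithms in the report go further and handle the coded multicast setting.
from collections import deque

def max_flow(cap, s, t):
    # cap[u][v]: residual capacity; mutated in place during augmentation.
    n = len(cap)
    flow = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:          # BFS for a shortest augmenting path
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow                        # no augmenting path: flow is maximum
        v, aug = t, float('inf')
        while v != s:                          # bottleneck capacity on the path
            u = parent[v]
            aug = min(aug, cap[u][v])
            v = u
        v = t
        while v != s:                          # augment along the path
            u = parent[v]
            cap[u][v] -= aug
            cap[v][u] += aug
            v = u
        flow += aug

cap = [[0, 2, 2, 0], [0, 0, 1, 1], [0, 0, 0, 2], [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))   # 3, matching the min cut separating node 0 from node 3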
Export
BibTeX
@techreport{MPI-I-2003-1-008,
TITLE = {Polynomial time algorithms for network information flow},
AUTHOR = {Sanders, Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-008},
NUMBER = {MPI-I-2003-1-008},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {The famous max-flow min-cut theorem states that a source node $s$ can send information through a network $(V,E)$ to a sink node $t$ at a rate determined by the min-cut separating $s$ and $t$. Recently it has been shown that this rate can also be achieved for multicasting to several sinks provided that the intermediate nodes are allowed to reencode the information they receive. We give polynomial time algorithms for solving this problem. We additionally underline the potential benefit of coding by showing that multicasting without coding sometimes only allows a rate that is a factor $\Omega(\log |V|)$ smaller.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Sanders, Peter
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Polynomial time algorithms for network information flow :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6B4A-B
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-008
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 18 p.
%X The famous max-flow min-cut theorem states that a source node $s$ can
send information through a network $(V,E)$ to a sink node $t$ at a
rate determined by the min-cut separating $s$ and $t$. Recently it
has been shown that this rate can also be achieved for multicasting to
several sinks provided that the intermediate nodes are allowed to
reencode the information they receive. We give
polynomial time algorithms for solving this problem. We additionally
underline the potential benefit of coding by showing that multicasting
without coding sometimes only allows a rate that is a factor
$\Omega(\log |V|)$ smaller.
%B Research Report / Max-Planck-Institut für Informatik
Cross-monotonic cost sharing methods for connected facility location games
G. Schäfer and S. Leonardi
Technical Report, 2003
G. Schäfer and S. Leonardi
Technical Report, 2003
Abstract
We present cost sharing methods for connected facility location
games that are cross-monotonic, competitive, and recover a constant
fraction of the cost of the constructed solution.
The novelty of this paper is that we use randomized algorithms, and
that we share the expected cost among the participating users.
As a consequence, our cost sharing methods are simple, and achieve
attractive approximation ratios for various NP-hard problems.
We also provide a primal-dual cost sharing method for the connected
facility location game with opening costs.
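Cross-monotonicity states that a user's cost share never increases when more users join. The brute-force Python check below makes the definition concrete on all pairs of nested subsets; it is exponential and purely illustrative, and the equal-split example is an assumption, not a mechanism from the report.
from itertools import chain, combinations

def subsets(users):
    return chain.from_iterable(
        combinations(users, r) for r in range(1, len(users) + 1))

def is_cross_monotonic(users, share):
    # share(i, S): cost share of user i under served set S.
    for S in subsets(users):
        for T in subsets(users):
            if set(S) <= set(T):
                for i in S:
                    if share(i, set(T)) > share(i, set(S)) + 1e-9:
                        return False
    return True

# Splitting a fixed facility cost equally is cross-monotonic:
print(is_cross_monotonic([1, 2, 3], lambda i, S: 10.0 / len(S)))   # True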
Export
BibTeX
@techreport{MPI-I-2003-1-017,
TITLE = {Cross-monotonic cost sharing methods for connected facility location games},
AUTHOR = {Sch{\"a}fer, Guido and Leonardi, Stefano},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-017},
NUMBER = {MPI-I-2003-1-017},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {We present cost sharing methods for connected facility location games that are cross-monotonic, competitive, and recover a constant fraction of the cost of the constructed solution. The novelty of this paper is that we use randomized algorithms, and that we share the expected cost among the participating users. As a consequence, our cost sharing methods are simple, and achieve attractive approximation ratios for various NP-hard problems. We also provide a primal-dual cost sharing method for the connected facility location game with opening costs.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Schäfer, Guido
%A Leonardi, Stefano
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Cross-monotonic cost sharing methods for connected facility location games :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6B12-7
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-017
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 10 p.
%X We present cost sharing methods for connected facility location
games that are cross-monotonic, competitive, and recover a constant
fraction of the cost of the constructed solution.
The novelty of this paper is that we use randomized algorithms, and
that we share the expected cost among the participating users.
As a consequence, our cost sharing methods are simple, and achieve
attractive approximation ratios for various NP-hard problems.
We also provide a primal-dual cost sharing method for the connected
facility location game with opening costs.
%B Research Report / Max-Planck-Institut für Informatik
Topology matters: smoothed competitive analysis of metrical task systems
G. Schäfer and N. Sivadasan
Technical Report, 2003
G. Schäfer and N. Sivadasan
Technical Report, 2003
Abstract
We consider online problems that can be modeled as \emph{metrical
task systems}:
An online algorithm resides in a graph $G$ of $n$ nodes and may move
in this graph at a cost equal to the distance.
The algorithm has to service a sequence of \emph{tasks} that arrive
online; each task specifies for each node a \emph{request cost} that
is incurred if the algorithm services the task in this particular node.
The objective is to minimize the total request cost plus the total
travel cost.
Several important online problems can be modeled as metrical task
systems.
Borodin, Linial and Saks \cite{BLS92} presented a deterministic
\emph{work function algorithm} (WFA) for metrical task systems
having a tight competitive ratio of $2n-1$.
However, the competitive ratio often is an over-pessimistic
estimation of the true performance of an online algorithm.
In this paper, we present a \emph{smoothed competitive analysis}
of WFA.
Given an adversarial task sequence, we smoothen the request costs
by means of a symmetric additive smoothing model and analyze the
competitive ratio of WFA on the smoothed task sequence.
Our analysis reveals that the smoothed competitive ratio of WFA
is much better than $O(n)$ and that it depends on several
topological parameters of the underlying graph $G$, such as
the minimum edge length $U_{\min}$, the maximum degree $D$,
and the edge diameter $diam$.
Assuming that the ratio between the maximum and the minimum edge length
of $G$ is bounded by a constant, the smoothed competitive ratio of WFA
becomes $O(diam (U_{\min}/\sigma + \log(D)))$ and
$O(\sqrt{n \cdot (U_{\min}/\sigma + \log(D))})$, where
$\sigma$ denotes the standard deviation of the smoothing distribution.
For example, already for perturbations with $\sigma = \Theta(U_{\min})$
the competitive ratio reduces to $O(\log n)$ on a clique and to
$O(\sqrt{n})$ on a line.
We also prove that for a large class of graphs these bounds are
asymptotically tight.
Furthermore, we provide two lower bounds for any arbitrary graph.
We obtain a better bound of
$O(\beta \cdot (U_{\min}/\sigma + \log(D)))$ on
the smoothed competitive ratio of WFA if each adversarial
task contains at most $\beta$ non-zero entries.
Our analysis holds for various probability distributions,
including the uniform and the normal distribution.
We also provide the first average case analysis of WFA.
We prove that WFA has an $O(\log(D))$ expected competitive
ratio if the request costs are chosen randomly from an arbitrary
non-increasing distribution with standard deviation $\sigma$.
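The work function algorithm is compact enough to state in code: after each task, update the work function and move to a state minimizing $w_t(s) + d(\mathrm{cur}, s)$. The Python sketch below follows this textbook formulation on toy data; tie-breaking and efficiency are ignored.
def wfa(d, tasks, start=0):
    # d[u][v]: metric distance between states; tasks: request-cost vectors.
    n = len(d)
    w = [d[start][s] for s in range(n)]    # initial work function w_0
    cur, total = start, 0.0
    for r in tasks:
        # w_t(s) = min_u ( w_{t-1}(u) + r(u) + d(u, s) )
        w = [min(w[u] + r[u] + d[u][s] for u in range(n)) for s in range(n)]
        nxt = min(range(n), key=lambda s: w[s] + d[cur][s])   # the WFA rule
        total += d[cur][nxt] + r[nxt]
        cur = nxt
    return total

d = [[0, 1], [1, 0]]                       # two states at unit distance
print(wfa(d, [[5, 0], [0, 5], [5, 0]]))    # online cost on a toy sequence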
Export
BibTeX
@techreport{MPI-I-2003-1-016,
TITLE = {Topology matters: smoothed competitive analysis of metrical task systems},
AUTHOR = {Sch{\"a}fer, Guido and Sivadasan, Naveen},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-016},
NUMBER = {MPI-I-2003-1-016},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {We consider online problems that can be modeled as \emph{metrical task systems}: An online algorithm resides in a graph $G$ of $n$ nodes and may move in this graph at a cost equal to the distance. The algorithm has to service a sequence of \emph{tasks} that arrive online; each task specifies for each node a \emph{request cost} that is incurred if the algorithm services the task in this particular node. The objective is to minimize the total request cost plus the total travel cost. Several important online problems can be modeled as metrical task systems. Borodin, Linial and Saks \cite{BLS92} presented a deterministic \emph{work function algorithm} (WFA) for metrical task systems having a tight competitive ratio of $2n-1$. However, the competitive ratio often is an over-pessimistic estimation of the true performance of an online algorithm. In this paper, we present a \emph{smoothed competitive analysis} of WFA. Given an adversarial task sequence, we smoothen the request costs by means of a symmetric additive smoothing model and analyze the competitive ratio of WFA on the smoothed task sequence. Our analysis reveals that the smoothed competitive ratio of WFA is much better than $O(n)$ and that it depends on several topological parameters of the underlying graph $G$, such as the minimum edge length $U_{\min}$, the maximum degree $D$, and the edge diameter $diam$. Assuming that the ratio between the maximum and the minimum edge length of $G$ is bounded by a constant, the smoothed competitive ratio of WFA becomes $O(diam (U_{\min}/\sigma + \log(D)))$ and $O(\sqrt{n \cdot (U_{\min}/\sigma + \log(D))})$, where $\sigma$ denotes the standard deviation of the smoothing distribution. For example, already for perturbations with $\sigma = \Theta(U_{\min})$ the competitive ratio reduces to $O(\log n)$ on a clique and to $O(\sqrt{n})$ on a line. We also prove that for a large class of graphs these bounds are asymptotically tight. Furthermore, we provide two lower bounds for any arbitrary graph. We obtain a better bound of $O(\beta \cdot (U_{\min}/\sigma + \log(D)))$ on the smoothed competitive ratio of WFA if each adversarial task contains at most $\beta$ non-zero entries. Our analysis holds for various probability distributions, including the uniform and the normal distribution. We also provide the first average case analysis of WFA. We prove that WFA has an $O(\log(D))$ expected competitive ratio if the request costs are chosen randomly from an arbitrary non-increasing distribution with standard deviation $\sigma$.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Schäfer, Guido
%A Sivadasan, Naveen
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Topology matters: smoothed competitive analysis of metrical task systems :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6B15-1
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-016
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 28 p.
%X We consider online problems that can be modeled as \emph{metrical
task systems}:
An online algorithm resides in a graph $G$ of $n$ nodes and may move
in this graph at a cost equal to the distance.
The algorithm has to service a sequence of \emph{tasks} that arrive
online; each task specifies for each node a \emph{request cost} that
is incurred if the algorithm services the task in this particular node.
The objective is to minimize the total request cost plus the total
travel cost.
Several important online problems can be modeled as metrical task
systems.
Borodin, Linial and Saks \cite{BLS92} presented a deterministic
\emph{work function algorithm} (WFA) for metrical task systems
having a tight competitive ratio of $2n-1$.
However, the competitive ratio often is an over-pessimistic
estimation of the true performance of an online algorithm.
In this paper, we present a \emph{smoothed competitive analysis}
of WFA.
Given an adversarial task sequence, we smoothen the request costs
by means of a symmetric additive smoothing model and analyze the
competitive ratio of WFA on the smoothed task sequence.
Our analysis reveals that the smoothed competitive ratio of WFA
is much better than $O(n)$ and that it depends on several
topological parameters of the underlying graph $G$, such as
the minimum edge length $U_{\min}$, the maximum degree $D$,
and the edge diameter $diam$.
Assuming that the ratio between the maximum and the minimum edge length
of $G$ is bounded by a constant, the smoothed competitive ratio of WFA
becomes $O(diam (U_{\min}/\sigma + \log(D)))$ and
$O(\sqrt{n \cdot (U_{\min}/\sigma + \log(D))})$, where
$\sigma$ denotes the standard deviation of the smoothing distribution.
For example, already for perturbations with $\sigma = \Theta(U_{\min})$
the competitive ratio reduces to $O(\log n)$ on a clique and to
$O(\sqrt{n})$ on a line.
We also prove that for a large class of graphs these bounds are
asymptotically tight.
Furthermore, we provide two lower bounds for any arbitrary graph.
We obtain a better bound of
$O(\beta \cdot (U_{\min}/\sigma + \log(D)))$ on
the smoothed competitive ratio of WFA if each adversarial
task contains at most $\beta$ non-zero entries.
Our analysis holds for various probability distributions,
including the uniform and the normal distribution.
We also provide the first average case analysis of WFA.
We prove that WFA has an $O(\log(D))$ expected competitive
ratio if the request costs are chosen randomly from an arbitrary
non-increasing distribution with standard deviation $\sigma$.
%B Research Report / Max-Planck-Institut für Informatik
A note on the smoothed complexity of the single-source shortest path problem
G. Schäfer
Technical Report, 2003
G. Schäfer
Technical Report, 2003
Abstract
Banderier, Beier and Mehlhorn showed that the single-source shortest
path problem has smoothed complexity $O(m+n(K-k))$ if the edge costs are
$K$-bit integers and the last $k$ least significant bits are perturbed
randomly. Their analysis holds if each bit is set to $0$ or $1$ with
probability $\frac{1}{2}$.
We extend their result and show that the same analysis goes through for
a large class of probability distributions:
We prove a smoothed complexity of $O(m+n(K-k))$ if the last $k$ bits of
each edge cost are replaced by some random number chosen from $[0,
\dots, 2^k-1]$ according to some \emph{arbitrary} probability
distribution whose expectation is not too close to zero.
We do not require that the edge costs are perturbed independently.
The same time bound holds even if the random perturbations are
heterogeneous.
If $k=K$ our analysis implies a linear average case running time for
various probability distributions.
We also show that the running time is $O(m+n(K-k))$ with high
probability if the random replacements are chosen independently.
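The perturbation model itself is a one-liner: keep the high-order $K-k$ bits of each edge cost and randomize the low-order $k$ bits. The Python sketch below pairs it with a textbook Dijkstra; the edge list and parameters are illustrative assumptions.
import heapq, random

def smooth_costs(edges, k):
    # Replace the k least significant bits of each cost by a uniform
    # random value in [0, 2^k - 1] (any distribution could be plugged in).
    high = ~((1 << k) - 1)
    return [(u, v, (c & high) | random.randrange(1 << k)) for u, v, c in edges]

def dijkstra(n, edges, s):
    adj = [[] for _ in range(n)]
    for u, v, c in edges:
        adj[u].append((v, c))
    dist = [float('inf')] * n
    dist[s] = 0
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                          # stale queue entry
        for v, c in adj[u]:
            if d + c < dist[v]:
                dist[v] = d + c
                heapq.heappush(pq, (dist[v], v))
    return dist

edges = [(0, 1, 12), (1, 2, 9), (0, 2, 25)]   # K = 5 bit integer costs
print(dijkstra(3, smooth_costs(edges, k=2), 0))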
Export
BibTeX
@techreport{MPI-I-2003-1-018,
TITLE = {A note on the smoothed complexity of the single-source shortest path problem},
AUTHOR = {Sch{\"a}fer, Guido},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-018},
NUMBER = {MPI-I-2003-1-018},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {Banderier, Beier and Mehlhorn showed that the single-source shortest path problem has smoothed complexity $O(m+n(K-k))$ if the edge costs are $K$-bit integers and the last $k$ least significant bits are perturbed randomly. Their analysis holds if each bit is set to $0$ or $1$ with probability $\frac{1}{2}$. We extend their result and show that the same analysis goes through for a large class of probability distributions: We prove a smoothed complexity of $O(m+n(K-k))$ if the last $k$ bits of each edge cost are replaced by some random number chosen from $[0, \dots, 2^k-1]$ according to some \emph{arbitrary} probability distribution whose expectation is not too close to zero. We do not require that the edge costs are perturbed independently. The same time bound holds even if the random perturbations are heterogeneous. If $k=K$ our analysis implies a linear average case running time for various probability distributions. We also show that the running time is $O(m+n(K-k))$ with high probability if the random replacements are chosen independently.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Schäfer, Guido
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A note on the smoothed complexity of the single-source shortest path problem :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6B0D-6
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-018
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 8 p.
%X Banderier, Beier and Mehlhorn showed that the single-source shortest
path problem has smoothed complexity $O(m+n(K-k))$ if the edge costs are
$K$-bit integers and the last $k$ least significant bits are perturbed
randomly. Their analysis holds if each bit is set to $0$ or $1$ with
probability $\frac{1}{2}$.
We extend their result and show that the same analysis goes through for
a large class of probability distributions:
We prove a smoothed complexity of $O(m+n(K-k))$ if the last $k$ bits of
each edge cost are replaced by some random number chosen from $[0,
\dots, 2^k-1]$ according to some \emph{arbitrary} probability
distribution whose expectation is not too close to zero.
We do not require that the edge costs are perturbed independently.
The same time bound holds even if the random perturbations are
heterogeneous.
If $k=K$ our analysis implies a linear average case running time for
various probability distributions.
We also show that the running time is $O(m+n(K-k))$ with high
probability if the random replacements are chosen independently.
%B Research Report / Max-Planck-Institut für Informatik
Average case and smoothed competitive analysis of the multi-level feedback algorithm
G. Schäfer, L. Becchetti, S. Leonardi, A. Marchetti-Spaccamela and T. Vredeveld
Technical Report, 2003
G. Schäfer, L. Becchetti, S. Leonardi, A. Marchetti-Spaccamela and T. Vredeveld
Technical Report, 2003
Abstract
In this paper we introduce the notion of smoothed competitive analysis
of online algorithms. Smoothed analysis has been proposed by Spielman
and Teng [\emph{Smoothed analysis of algorithms: Why the simplex
algorithm usually takes polynomial time}, STOC, 2001] to explain the behaviour
of algorithms that work well in practice while performing very poorly
from a worst case analysis point of view.
We apply this notion to analyze the Multi-Level Feedback (MLF)
algorithm to minimize the total flow time on a sequence of jobs
released over time when the processing time of a job is only known at time of
completion.
The initial processing times are integers in the range $[1,2^K]$.
We use a partial bit randomization model, where the initial processing
times are smoothened by changing the $k$ least significant bits under
a quite general class of probability distributions.
We show that MLF admits a smoothed competitive ratio of
$O((2^k/\sigma)^3 + (2^k/\sigma)^2 2^{K-k})$, where $\sigma$ denotes
the standard deviation of the distribution.
In particular, we obtain a competitive ratio of $O(2^{K-k})$ if
$\sigma = \Theta(2^k)$.
We also prove an $\Omega(2^{K-k})$ lower bound for any deterministic
algorithm that is run on processing times smoothened according to the
partial bit randomization model.
For various other smoothening models, including the additive symmetric
smoothening model used by Spielman and Teng, we give a higher lower
bound of $\Omega(2^K)$.
A direct consequence of our result is also the first average case
analysis of MLF. We show a constant expected ratio of the total flow time of
MLF to the optimum under several distributions including the uniform
distribution.
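The MLF rule being analyzed is short: a job in queue $i$ receives a quantum of $2^i$ time units and, if unfinished, is demoted to queue $i+1$. The Python sketch below assumes, purely to keep the code small, that all jobs are released at time $0$ (the report handles release over time).
from collections import deque

def mlf_total_flow_time(proc_times):
    # With all releases at time 0, flow time equals completion time.
    queues = [deque((j, p) for j, p in enumerate(proc_times))]
    t, completion, level = 0, {}, 0
    while level < len(queues):
        q = queues[level]
        while q:
            j, rem = q.popleft()
            slice_ = min(rem, 2 ** level)     # quantum of queue `level`
            t += slice_
            if rem > slice_:                  # unfinished: demote one level
                if level + 1 == len(queues):
                    queues.append(deque())
                queues[level + 1].append((j, rem - slice_))
            else:
                completion[j] = t
        level += 1
    return sum(completion.values())

print(mlf_total_flow_time([3, 1, 6]))   # 2 + 5 + 10 = 17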
Export
BibTeX
@techreport{MPI-I-2003-1-014,
TITLE = {Average case and smoothed competitive analysis of the multi-level feedback algorithm},
AUTHOR = {Sch{\"a}fer, Guido and Becchetti, Luca and Leonardi, Stefano and Marchetti-Spaccamela, Alberto and Vredeveld, Tjark},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-014},
NUMBER = {MPI-I-2003-1-014},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {In this paper we introduce the notion of smoothed competitive analysis of online algorithms. Smoothed analysis has been proposed by Spielman and Teng [\emph{Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time}, STOC, 2001] to explain the behaviour of algorithms that work well in practice while performing very poorly from a worst case analysis point of view. We apply this notion to analyze the Multi-Level Feedback (MLF) algorithm to minimize the total flow time on a sequence of jobs released over time when the processing time of a job is only known at time of completion. The initial processing times are integers in the range $[1,2^K]$. We use a partial bit randomization model, where the initial processing times are smoothened by changing the $k$ least significant bits under a quite general class of probability distributions. We show that MLF admits a smoothed competitive ratio of $O((2^k/\sigma)^3 + (2^k/\sigma)^2 2^{K-k})$, where $\sigma$ denotes the standard deviation of the distribution. In particular, we obtain a competitive ratio of $O(2^{K-k})$ if $\sigma = \Theta(2^k)$. We also prove an $\Omega(2^{K-k})$ lower bound for any deterministic algorithm that is run on processing times smoothened according to the partial bit randomization model. For various other smoothening models, including the additive symmetric smoothening model used by Spielman and Teng, we give a higher lower bound of $\Omega(2^K)$. A direct consequence of our result is also the first average case analysis of MLF. We show a constant expected ratio of the total flow time of MLF to the optimum under several distributions including the uniform distribution.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Schäfer, Guido
%A Becchetti, Luca
%A Leonardi, Stefano
%A Marchetti-Spaccamela, Alberto
%A Vredeveld, Tjark
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
%T Average case and smoothed competitive analysis of the multi-level feedback algorithm :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6B1C-4
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-014
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 31 p.
%X In this paper we introduce the notion of smoothed competitive analysis
of online algorithms. Smoothed analysis has been proposed by Spielman
and Teng [\emph{Smoothed analysis of algorithms: Why the simplex
algorithm usually takes polynomial time}, STOC, 2001] to explain the behaviour
of algorithms that work well in practice while performing very poorly
from a worst case analysis point of view.
We apply this notion to analyze the Multi-Level Feedback (MLF)
algorithm to minimize the total flow time on a sequence of jobs
released over time when the processing time of a job is only known at time of
completion.
The initial processing times are integers in the range $[1,2^K]$.
We use a partial bit randomization model, where the initial processing
times are smoothened by changing the $k$ least significant bits under
a quite general class of probability distributions.
We show that MLF admits a smoothed competitive ratio of
$O((2^k/\sigma)^3 + (2^k/\sigma)^2 2^{K-k})$, where $\sigma$ denotes
the standard deviation of the distribution.
In particular, we obtain a competitive ratio of $O(2^{K-k})$ if
$\sigma = \Theta(2^k)$.
We also prove an $\Omega(2^{K-k})$ lower bound for any deterministic
algorithm that is run on processing times smoothened according to the
partial bit randomization model.
For various other smoothening models, including the additive symmetric
smoothening model used by Spielman and Teng, we give a higher lower
bound of $\Omega(2^K)$.
A direct consequence of our result is also the first average case
analysis of MLF. We show a constant expected ratio of the total flow time of
MLF to the optimum under several distributions including the uniform
distribution.
%B Research Report / Max-Planck-Institut für Informatik
The Diamond Operator for Real Algebraic Numbers
S. Schmitt
Technical Report, 2003
S. Schmitt
Technical Report, 2003
Abstract
Real algebraic numbers are real roots of polynomials with integral
coefficients. They can be represented as expressions whose
leaves are integers and whose internal nodes are additions, subtractions,
multiplications, divisions, k-th root operations for integral k,
or taking roots of polynomials whose coefficients are given by the value
of subexpressions. This last operator is called the diamond operator.
I explain the implementation of the diamond operator in a LEDA extension
package.
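A purely numeric caricature of the expression-tree idea: leaves are integers, internal nodes apply field operations or k-th roots, and a diamond node selects one real root of a polynomial whose coefficients are subexpression values. The sketch below uses floating point via numpy.roots; the actual LEDA extension works with exact arithmetic, which this does not attempt.
import numpy as np

def diamond(coeffs, index):
    # coeffs[i] is the coefficient of x^i; numpy expects highest degree first.
    roots = np.roots(list(reversed(coeffs)))
    real = sorted(r.real for r in roots if abs(r.imag) < 1e-9)
    return real[index]    # the index-th real root in ascending order

# sqrt(2) as the second real root of x^2 - 2, built from integer leaves:
print(diamond([-2, 0, 1], 1))   # ~1.4142135623730951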
Export
BibTeX
@techreport{s-doran-03,
TITLE = {The Diamond Operator for Real Algebraic Numbers},
AUTHOR = {Schmitt, Susanne},
LANGUAGE = {eng},
NUMBER = {ECG-TR-243107-01},
INSTITUTION = {Effective Computational Geometry for Curves and Surfaces},
ADDRESS = {Sophia Antipolis, FRANCE},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {Real algebraic numbers are real roots of polynomials with integral coefficients. They can be represented as expressions whose leaves are integers and whose internal nodes are additions, subtractions, multiplications, divisions, k-th root operations for integral k, or taking roots of polynomials whose coefficients are given by the value of subexpressions. This last operator is called the diamond operator. I explain the implementation of the diamond operator in a LEDA extension package.},
}
Endnote
%0 Report
%A Schmitt, Susanne
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T The Diamond Operator for Real Algebraic Numbers :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0019-EBB1-B
%Y Effective Computational Geometry for Curves and Surfaces
%C Sophia Antipolis, FRANCE
%D 2003
%X Real algebraic numbers are real roots of polynomials with integral
coefficients. They can be represented as expressions whose
leaves are integers and whose internal nodes are additions, subtractions,
multiplications, divisions, k-th root operations for integral k,
or taking roots of polynomials whose coefficients are given by the value
of subexpressions. This last operator is called the diamond operator.
I explain the implementation of the diamond operator in a LEDA extension
package.
A linear time heuristic for the branch-decomposition of planar graphs
H. Tamaki
Technical Report, 2003a
H. Tamaki
Technical Report, 2003a
Abstract
Let $G$ be a biconnected planar graph given together with its planar drawing.
A {\em face-vertex walk} in $G$ of length $k$
is an alternating sequence $x_0, \ldots x_k$ of
vertices and faces (i.e., if $x_{i - 1}$ is a face then $x_i$ is
a vertex and vice versa) such that $x_{i - 1}$ and $x_i$ are incident
with each other for $1 \leq i \leq k$.
For each vertex or face $x$ of $G$, let $\alpha_x$ denote
the length of the shortest face-vertex walk from the outer face of $G$ to $x$.
Let $\alpha_G$ denote the maximum of $\alpha_x$ over all vertices/faces $x$.
We show that there always exists a branch-decomposition of $G$ with width
$\alpha_G$ and that such a decomposition
can be constructed in linear time. We also give experimental results,
in which we compare the width of our decomposition with the optimal
width and with the width obtained by some heuristics for general
graphs proposed by previous researchers, on test instances used
by those researchers.
On 56 out of the total 59 test instances, our
method gives a decomposition within additive 2 of the optimum width and
on 33 instances it achieves the optimum width.
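Computing $\alpha_G$ amounts to a breadth-first search on the bipartite vertex/face incidence graph, started at the outer face. The Python sketch below runs it on a hand-coded triangle (one inner and one outer face); the incidence encoding is an illustrative assumption.
from collections import deque

def alpha(incidence, outer):
    # incidence: maps each vertex/face to its incident faces/vertices.
    dist = {outer: 0}
    q = deque([outer])
    while q:
        x = q.popleft()
        for y in incidence[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return max(dist.values())

incidence = {
    'outer': ['v1', 'v2', 'v3'], 'inner': ['v1', 'v2', 'v3'],
    'v1': ['outer', 'inner'], 'v2': ['outer', 'inner'], 'v3': ['outer', 'inner'],
}
print(alpha(incidence, 'outer'))   # alpha_G = 2 for the triangle
By the result above, the returned value directly bounds the width of a branch-decomposition constructible in linear time.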
Export
BibTeX
@techreport{MPI-I-2003-1-010,
TITLE = {A linear time heuristic for the branch-decomposition of planar graphs},
AUTHOR = {Tamaki, Hisao},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-010},
NUMBER = {MPI-I-2003-1-010},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {Let $G$ be a biconnected planar graph given together with its planar drawing. A {\em face-vertex walk} in $G$ of length $k$ is an alternating sequence $x_0, \ldots x_k$ of vertices and faces (i.e., if $x_{i - 1}$ is a face then $x_i$ is a vertex and vice versa) such that $x_{i - 1}$ and $x_i$ are incident with each other for $1 \leq i \leq k$. For each vertex or face $x$ of $G$, let $\alpha_x$ denote the length of the shortest face-vertex walk from the outer face of $G$ to $x$. Let $\alpha_G$ denote the maximum of $\alpha_x$ over all vertices/faces $x$. We show that there always exists a branch-decomposition of $G$ with width $\alpha_G$ and that such a decomposition can be constructed in linear time. We also give experimental results, in which we compare the width of our decomposition with the optimal width and with the width obtained by some heuristics for general graphs proposed by previous researchers, on test instances used by those researchers. On 56 out of the total 59 test instances, our method gives a decomposition within additive 2 of the optimum width and on 33 instances it achieves the optimum width.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Tamaki, Hisao
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A linear time heuristic for the branch-decomposition of planar graphs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6B37-6
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-010
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 18 p.
%X Let $G$ be a biconnected planar graph given together with its planar drawing.
A {\em face-vertex walk} in $G$ of length $k$
is an alternating sequence $x_0, \ldots x_k$ of
vertices and faces (i.e., if $x_{i - 1}$ is a face then $x_i$ is
a vertex and vice versa) such that $x_{i - 1}$ and $x_i$ are incident
with each other for $1 \leq i \leq k$.
For each vertex or face $x$ of $G$, let $\alpha_x$ denote
the length of the shortest face-vertex walk from the outer face of $G$ to $x$.
Let $\alpha_G$ denote the maximum of $\alpha_x$ over all vertices/faces $x$.
We show that there always exists a branch-decomposition of $G$ with width
$\alpha_G$ and that such a decomposition
can be constructed in linear time. We also give experimental results,
in which we compare the width of our decomposition with the optimal
width and with the width obtained by some heuristics for general
graphs proposed by previous researchers, on test instances used
by those researchers.
On 56 out of the total 59 test instances, our
method gives a decomposition within additive 2 of the optimum width and
on 33 instances it achieves the optimum width.
%B Research Report / Max-Planck-Institut für Informatik
Alternating cycles contribution: a strategy of tour-merging for the traveling salesman problem
H. Tamaki
Technical Report, 2003b
H. Tamaki
Technical Report, 2003b
Abstract
A strategy of merging several traveling salesman tours
into a better tour, called ACC (Alternating Cycles Contribution)
is introduced. Two algorithms embodying this strategy for
geometric instances are implemented and used to enhance Helsgaun's
implementation of his variant of the Lin-Kernighan heuristic. Experiments
on the large instances in TSPLIB show that a significant
improvement of performance is obtained.
These algorithms were used in September 2002 to find a
new best tour for the largest instance pla85900
in TSPLIB.
Export
BibTeX
@techreport{MPI-I-2003-1-007,
TITLE = {Alternating cycles contribution: a strategy of tour-merging for the traveling salesman problem},
AUTHOR = {Tamaki, Hisao},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-007},
NUMBER = {MPI-I-2003-1-007},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {A strategy of merging several traveling salesman tours into a better tour, called ACC (Alternating Cycles Contribution) is introduced. Two algorithms embodying this strategy for geometric instances are implemented and used to enhance Helsgaun's implementation of his variant of the Lin-Kernighan heuristic. Experiments on the large instances in TSPLIB show that a significant improvement of performance is obtained. These algorithms were used in September 2002 to find a new best tour for the largest instance pla85900 in TSPLIB.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Tamaki, Hisao
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Alternating cycles contribution: a strategy of tour-merging for the traveling salesman problem :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6B66-B
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-007
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 22 p.
%X A strategy of merging several traveling salesman tours
into a better tour, called ACC (Alternating Cycles Contribution)
is introduced. Two algorithms embodying this strategy for
geometric instances are implemented and used to enhance Helsgaun's implementation
of his variant of the Lin-Kernighan heuristic. Experiments
on the large instances in TSPLIB show that a significant
improvement of performance is obtained.
These algorithms were used in September 2002 to find a
new best tour for the largest instance pla85900
in TSPLIB.
%B Research Report / Max-Planck-Institut für Informatik
3D acquisition of mirroring objects
M. Tarini, H. P. A. Lensch, M. Gösele and H.-P. Seidel
Technical Report, 2003
M. Tarini, H. P. A. Lensch, M. Gösele and H.-P. Seidel
Technical Report, 2003
Abstract
Objects with mirroring optical characteristics are left out of the
scope of most 3D scanning methods. We present here a new automatic
acquisition approach, shape-from-distortion, that focuses on that
category of objects, requires only a still camera and a color
monitor, and produces range scans (plus a normal and a reflectance
map) of the target.
Our technique consists of two steps: first, an improved
environment matte is captured for the mirroring object, using the
interference of patterns with different frequencies in order to
obtain sub-pixel accuracy. Then, the matte is converted into a
normal and a depth map by exploiting the self coherence of a
surface when integrating the normal map along different paths.
The results show very high accuracy, capturing even the smallest
surface details. The acquired depth maps can be further processed
using standard techniques to produce a complete 3D mesh of the
object.
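The role of the two pattern frequencies is easiest to see in one dimension: the low-frequency phase selects the stripe period, and the high-frequency phase refines the position within it to sub-pixel accuracy. The noise-free Python sketch below is an illustrative assumption about the phase model, not the report's decoding pipeline.
def decode(phase_lo, phase_hi, f_lo, f_hi, width):
    coarse = phase_lo / f_lo * width        # rough position from the low frequency
    period = width / f_hi                   # one stripe of the fine pattern
    k = round((coarse - phase_hi * period) / period)   # which fine stripe?
    return (k + phase_hi) * period          # sub-pixel refined position

width, f_lo, f_hi, x_true = 1024, 1, 64, 517.3
phase_lo = (x_true * f_lo / width) % 1.0    # what the camera would observe
phase_hi = (x_true * f_hi / width) % 1.0
print(decode(phase_lo, phase_hi, f_lo, f_hi, width))   # ~517.3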
Export
BibTeX
@techreport{TariniLenschGoeseleSeidel2003,
TITLE = {{3D} acquisition of mirroring objects},
AUTHOR = {Tarini, Marco and Lensch, Hendrik P. A. and G{\"o}sele, Michael and Seidel, Hans-Peter},
LANGUAGE = {eng},
NUMBER = {MPI-I-2003-4-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {Objects with mirroring optical characteristics are left out of the scope of most 3D scanning methods. We present here a new automatic acquisition approach, shape-from-distortion, that focuses on that category of objects, requires only a still camera and a color monitor, and produces range scans (plus a normal and a reflectance map) of the target. Our technique consists of two steps: first, an improved environment matte is captured for the mirroring object, using the interference of patterns with different frequencies in order to obtain sub-pixel accuracy. Then, the matte is converted into a normal and a depth map by exploiting the self coherence of a surface when integrating the normal map along different paths. The results show very high accuracy, capturing even the smallest surface details. The acquired depth maps can be further processed using standard techniques to produce a complete 3D mesh of the object.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Tarini, Marco
%A Lensch, Hendrik P. A.
%A Gösele, Michael
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T 3D acquisition of mirroring objects :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6AF5-F
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 37 p.
%X Objects with mirroring optical characteristics are left out of the
scope of most 3D scanning methods. We present here a new automatic
acquisition approach, shape-from-distortion, that focuses on that
category of objects, requires only a still camera and a color
monitor, and produces range scans (plus a normal and a reflectance
map) of the target.
Our technique consists of two steps: first, an improved
environment matte is captured for the mirroring object, using the
interference of patterns with different frequencies in order to
obtain sub-pixel accuracy. Then, the matte is converted into a
normal and a depth map by exploiting the self coherence of a
surface when integrating the normal map along different paths.
The results show very high accuracy, capturing even the smallest
surface details. The acquired depth maps can be further processed
using standard techniques to produce a complete 3D mesh of the
object.
%B Research Report / Max-Planck-Institut für Informatik
A flexible and versatile studio for synchronized multi-view video recording
C. Theobalt, M. Li, M. A. Magnor and H.-P. Seidel
Technical Report, 2003
C. Theobalt, M. Li, M. A. Magnor and H.-P. Seidel
Technical Report, 2003
Abstract
In recent years, the convergence of Computer Vision and Computer Graphics has put forth
new research areas that work on scene reconstruction from and analysis of multi-view video
footage. In free-viewpoint video, for example, new views of a scene are generated from an arbitrary viewpoint
in real-time from a set of real multi-view input video streams.
The analysis of real-world scenes from multi-view video
to extract motion information or reflection models is another field of research that
greatly benefits from high-quality input data.
Building a recording setup for multi-view video involves a great effort on the hardware
as well as the software side. The amount of image data to be processed is huge,
a decent lighting and camera setup is essential for a naturalistic scene appearance and
robust background subtraction, and the computing infrastructure has to enable
real-time processing of the recorded material.
This paper describes the recording setup for multi-view video acquisition that enables the
synchronized recording
of dynamic scenes from multiple camera positions under controlled conditions. The requirements
for the room and their implementation in the separate components of the studio are described in detail.
The efficiency and flexibility of the room is demonstrated on the basis of the results
that we obtain with a real-time 3D scene reconstruction system, a system for non-intrusive optical
motion capture and a model-based free-viewpoint video system for human actors.
Export
BibTeX
@techreport{TheobaltMingMagnorSeidel2003,
TITLE = {A flexible and versatile studio for synchronized multi-view video recording},
AUTHOR = {Theobalt, Christian and Li, Ming and Magnor, Marcus A. and Seidel, Hans-Peter},
LANGUAGE = {eng},
NUMBER = {MPI-I-2003-4-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {In recent years, the convergence of Computer Vision and Computer Graphics has put forth new research areas that work on scene reconstruction from and analysis of multi-view video footage. In free-viewpoint video, for example, new views of a scene are generated from an arbitrary viewpoint in real-time from a set of real multi-view input video streams. The analysis of real-world scenes from multi-view video to extract motion information or reflection models is another field of research that greatly benefits from high-quality input data. Building a recording setup for multi-view video involves a great effort on the hardware as well as the software side. The amount of image data to be processed is huge, a decent lighting and camera setup is essential for a naturalistic scene appearance and robust background subtraction, and the computing infrastructure has to enable real-time processing of the recorded material. This paper describes the recording setup for multi-view video acquisition that enables the synchronized recording of dynamic scenes from multiple camera positions under controlled conditions. The requirements for the room and their implementation in the separate components of the studio are described in detail. The efficiency and flexibility of the room is demonstrated on the basis of the results that we obtain with a real-time 3D scene reconstruction system, a system for non-intrusive optical motion capture and a model-based free-viewpoint video system for human actors.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Theobalt, Christian
%A Li, Ming
%A Magnor, Marcus A.
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Graphics - Optics - Vision, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T A flexible and versatile studio for synchronized multi-view video recording :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6AF2-6
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 18 p.
%X In recent years, the convergence of Computer Vision and Computer Graphics has put forth
new research areas that work on scene reconstruction from and analysis of multi-view video
footage. In free-viewpoint video, for example, new views of a scene are generated from an arbitrary viewpoint
in real-time from a set of real multi-view input video streams.
The analysis of real-world scenes from multi-view video
to extract motion information or reflection models is another field of research that
greatly benefits from high-quality input data.
Building a recording setup for multi-view video involves a great effort on the hardware
as well as the software side. The amount of image data to be processed is huge,
a decent lighting and camera setup is essential for a naturalistic scene appearance and
robust background subtraction, and the computing infrastructure has to enable
real-time processing of the recorded material.
This paper describes the recording setup for