Ascent (Advanced Semantics for Commonsense Knowledge Extraction) is a pipeline for automatically collecting, extracting and consolidating commonsense knowledge (CSK) from the web. Ascent extracts facet-enriched assertions, overcoming common limitations of the triple-based knowledge model used in traditional knowledge bases (KBs). Ascent also captures composite concepts with subgroups and related aspects, adding further expressiveness to CSK assertions.
- Demo: https://ascent.mpi-inf.mpg.de
- Download: https://ascent.mpi-inf.mpg.de/download
- Code: https://github.com/phongnt570/ascent
- Tuan-Phong Nguyen, Simon Razniewski, Gerhard Weikum. Advanced Semantics for Commonsense Knowledge Extraction. WWW 2021. [pdf]
- Tuan-Phong Nguyen, Simon Razniewski, Gerhard Weikum. Inside ASCENT: Exploring a Deep Commonsense Knowledge Base and its Usage in Question Answering. ACL 2021 - System Demonstrations. [pdf]
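The facet-enriched model described above can be illustrated with a small sketch (the class and facet names are hypothetical, not Ascent's actual data format): a subject-predicate-object assertion carries additional semantic facets, and the subject may be a refined concept such as a subgroup.

```python
from dataclasses import dataclass, field

# Toy sketch of a facet-enriched assertion. A plain triple cannot express
# qualifiers such as where, when, or how much; facets attach them directly
# to the assertion. Field names here are illustrative only.
@dataclass
class Assertion:
    subject: str                 # a primary concept, subgroup, or aspect, e.g. "baby elephant"
    predicate: str
    obj: str
    facets: dict = field(default_factory=dict)   # e.g. {"location": "in the wild"}

a = Assertion("elephant", "drink", "water",
              facets={"quantity": "up to 50 gallons", "frequency": "a day"})
print(a.subject, a.predicate, a.obj, a.facets)
```

Under this model, the same triple can appear with different facet sets, each describing the conditions under which the assertion holds.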
Commonsense knowledge about object properties, human behavior and general concepts is crucial for robust AI applications. However, automatic acquisition of this knowledge is challenging because of sparseness and bias in online sources. This paper presents Quasimodo, a methodology and tool suite for distilling commonsense properties from non-standard web sources. We devise novel ways of tapping into search-engine query logs and QA forums, and combining the resulting candidate assertions with statistical cues from encyclopedias, books and image tags in a corroboration step. Unlike prior work on commonsense knowledge bases, Quasimodo focuses on salient properties that are typically associated with certain objects or concepts. Extensive evaluations, including extrinsic use-case studies, show that Quasimodo provides better coverage than state-of-the-art baselines with comparable quality.
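The corroboration step can be sketched as a weighted combination of per-source evidence, ranking candidate assertions by how strongly multiple sources support them. This is only an illustrative toy, with made-up weights and scores, not Quasimodo's actual scoring function.

```python
# Toy corroboration sketch: candidates mined from query logs and QA forums
# are re-scored with statistical cues from books, encyclopedias and image
# tags, then ranked. Weights and scores below are hypothetical.
def corroborate(cues, weights):
    """Combine per-source evidence scores into one corroboration score."""
    return sum(weights[src] * cues.get(src, 0.0) for src in weights)

weights = {"query_logs": 0.4, "qa_forums": 0.3, "books": 0.2, "image_tags": 0.1}

candidates = {
    ("elephant", "has", "trunk"): {"query_logs": 0.9, "books": 0.8, "image_tags": 0.7},
    ("elephant", "is", "animal"): {"query_logs": 0.2, "books": 0.9},
}

ranked = sorted(candidates, key=lambda c: corroborate(candidates[c], weights),
                reverse=True)
print(ranked[0])  # the salient, well-corroborated assertion comes first
```

Note how the salience focus plays out here: "elephant is animal" is true but unremarkable, while "elephant has trunk" is the kind of typically-associated property Quasimodo targets.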
Commonsense knowledge (CSK) supports a variety of AI applications, from visual understanding to chatbots. Prior work on acquiring CSK, such as ConceptNet, has compiled statements that associate concepts, like everyday objects or activities, with properties that hold for most or some instances of the concept. Each concept is treated in isolation from other concepts, and the only quantitative measure (or ranking) of properties is a confidence score that the statement is valid. This paper aims to overcome these limitations by introducing a multi-faceted model of CSK statements and methods for joint reasoning over sets of inter-related statements. Our model captures four different dimensions of CSK statements: plausibility, typicality, remarkability and salience, with scoring and ranking along each dimension. For example, hyenas drinking water is typical but not salient, whereas hyenas eating carcasses is salient. For reasoning and ranking, we develop a method with soft constraints, to couple the inference over concepts that are related in a taxonomic hierarchy. The reasoning is cast into an integer linear program (ILP), and we leverage the theory of reduced costs of a relaxed LP to compute informative rankings. This methodology is applied to several large CSK collections. Our evaluation shows that we can consolidate these inputs into much cleaner and more expressive knowledge.
- Paper: Yohan Chalier, Simon Razniewski, Gerhard Weikum. Joint Reasoning for Multi-Faceted Commonsense Knowledge. AKBC 2020. [pdf]
- Demo: https://dice.mpi-inf.mpg.de/
- Code: https://github.com/ychalier/dice
- Data: https://www.dropbox.com/sh/yqn3o1ngnx8c8fz/AADD2jHxBZm31IZ0n3U_Dnf8a?dl=0
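The multi-faceted model and the taxonomy coupling can be sketched in miniature. The real system solves an ILP (via a relaxed LP and reduced costs); the toy below, with hypothetical scores and one illustrative soft rule ("what is typical for a parent concept is plausible for its children"), only shows the data model and the flavor of the coupling.

```python
# Toy sketch of Dice's four scoring dimensions plus one taxonomy rule.
# All names, scores, and the propagation rule are illustrative assumptions.
parent_of = {"hyena": "carnivore"}

# Each statement carries a score along the four dimensions.
scores = {
    ("hyena", "drinks water"):   {"plausible": 0.9, "typical": 0.9, "remarkable": 0.1, "salient": 0.1},
    ("hyena", "eats carcasses"): {"plausible": 0.9, "typical": 0.7, "remarkable": 0.8, "salient": 0.9},
    ("carnivore", "eats meat"):  {"plausible": 0.95, "typical": 0.9, "remarkable": 0.2, "salient": 0.5},
    ("hyena", "eats meat"):      {"plausible": 0.0, "typical": 0.0, "remarkable": 0.0, "salient": 0.0},
}

def propagate_plausibility(scores, parent_of):
    """Soft rule: a property typical for a parent concept is plausible for its children."""
    for (concept, prop), dims in scores.items():
        parent = parent_of.get(concept)
        if parent and (parent, prop) in scores:
            dims["plausible"] = max(dims["plausible"], scores[(parent, prop)]["typical"])
    return scores

propagate_plausibility(scores, parent_of)
print(scores[("hyena", "eats meat")]["plausible"])  # lifted to 0.9 by the parent's typicality
```

The example also shows the salience distinction from the abstract: "hyena drinks water" scores high on typicality but low on salience, while "hyena eats carcasses" is both remarkable and salient.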
WebChild is a large collection of commonsense knowledge, automatically extracted and disambiguated from Web contents. WebChild contains triples that connect nouns with adjectives via fine-grained relations like hasShape, hasTaste, evokesEmotion, etc. The arguments of these assertions, nouns and adjectives, are disambiguated by mapping them onto their proper WordNet senses.
Large-scale experiments demonstrate the high accuracy (more than 80 percent) and coverage (more than four million fine-grained disambiguated assertions) of WebChild.
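The sense disambiguation described above can be sketched as follows (the classes and sense-key strings are hypothetical examples, not WebChild's actual data format): arguments are WordNet senses rather than raw words, so an ambiguous adjective like "hot" is pinned to one meaning.

```python
from dataclasses import dataclass

# Sketch of a WebChild-style assertion: both arguments are disambiguated
# word senses, not plain strings. Sense keys below are illustrative only.
@dataclass(frozen=True)
class Sense:
    lemma: str
    wn_sense: str      # a WordNet sense identifier (hypothetical example)

@dataclass(frozen=True)
class Triple:
    noun: Sense
    relation: str      # fine-grained relation: hasShape, hasTaste, evokesEmotion, ...
    adjective: Sense

t = Triple(Sense("chili", "chili#n#1"), "hasTaste", Sense("hot", "hot#a#2"))
print(t.relation, t.adjective.lemma)
```

Mapping to senses is what keeps "hasTaste(chili, hot)" distinct from a temperature reading of "hot", which is the point of the disambiguation step.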
HowToKB is the first large-scale knowledge base that represents how-to (task) knowledge. Each task is represented by a frame with attributes for its parent task, preceding sub-task, following sub-task, required tools or other items, and links to visual illustrations.
- Cuong Xuan Chu, Niket Tandon, Gerhard Weikum. Distilling Task Knowledge from How-to Communities. WWW 2017. [pdf]
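The frame structure described above can be sketched directly (field names follow the description, not HowToKB's actual schema):

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical sketch of a HowToKB task frame, mirroring the attributes
# listed in the description: parent task, neighboring sub-tasks, required
# items, and links to visual illustrations.
@dataclass
class TaskFrame:
    task: str
    parent_task: Optional[str] = None
    preceding_subtask: Optional[str] = None
    following_subtask: Optional[str] = None
    required_items: List[str] = field(default_factory=list)
    illustrations: List[str] = field(default_factory=list)   # image links

frame = TaskFrame(
    task="boil water",
    parent_task="make tea",
    following_subtask="steep tea bag",
    required_items=["kettle", "water"],
)
print(frame.parent_task, frame.required_items)
```

Chaining frames through `parent_task`, `preceding_subtask` and `following_subtask` yields the task hierarchy and ordering that the KB encodes.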