UnCommonSense

Overview

This page is dedicated to the project on Informative Negative Knowledge about Everyday Concepts.

Commonsense knowledge about everyday concepts is an important asset for AI applications, such as question answering and chatbots. Recently, we have seen an increasing interest in the construction of structured commonsense knowledge bases (CSKBs). An important part of human commonsense is about properties that do not apply to concepts, yet existing CSKBs only store positive statements. Moreover, since CSKBs operate under the open-world assumption, absent statements are considered to have unknown truth rather than being invalid. We present the UNCOMMONSENSE framework for materializing informative negative commonsense statements. Given a target concept, comparable concepts are identified in the CSKB, for which a local closed-world assumption is postulated. This way, positive statements about comparable concepts that are absent for the target concept become seeds for negative statement candidates. The large set of candidates is then scrutinized, pruned and ranked by informativeness. 
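As an illustration, the sketch below shows the candidate-generation step under the local closed-world assumption; it assumes the CSKB is available as a simple mapping from concepts to their positive statements, and all names are illustrative. The actual pipeline additionally scrutinizes, prunes, and ranks the resulting candidates.

def negative_candidates(target, comparable_concepts, cskb):
    """Positive statements of comparable concepts that are absent for the
    target concept become candidate negations (local closed-world assumption)."""
    target_statements = cskb.get(target, set())
    candidates = set()
    for sibling in comparable_concepts:
        for statement in cskb.get(sibling, set()):
            if statement not in target_statements:
                candidates.add(statement)
    return candidates

# Toy CSKB: "territorial" holds for lion but is absent for gorilla,
# so it becomes a candidate negation for gorilla.
cskb = {
    "gorilla": {("HasProperty", "social"), ("AtLocation", "forest")},
    "lion": {("HasProperty", "territorial"), ("AtLocation", "savanna")},
}
print(negative_candidates("gorilla", ["lion"], cskb))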

Website: https://uncommonsense.mpi-inf.mpg.de/

Publications

  • Hiba Arnaout, Simon Razniewski, Gerhard Weikum, and Jeff Z. Pan, UnCommonSense: Informative Negative Knowledge about Everyday Concepts. CIKM'22 [PDF]
  • Hiba Arnaout, Tuan-Phong Nguyen, Simon Razniewski, Gerhard Weikum, and Jeff Z. Pan, UnCommonSense in Action! Informative Negations for Commonsense Knowledge Bases. WSDM'23 [DEMO] [VIDEO] [PDF]

Datasets

Download 6.2 million negations about 8k everyday concepts (up to top-k per concept).
Details: concepts source, methodology in strict-rank mode, and provenances.

Samples:


{
    "subject": "gorilla",
    "predicate": "HasProperty",
    "object": "territorial",
    "tail_phrase": "be territorial",
    "score": 0.23, 
    "strict_siblings": 
        [
            {
                "wild animal": ["tiger", "lion", "monkey", "chimpanzee"]
            }, 
            
            {
                "species": ["wombat", "tarsier", "gibbon"]
            }
        ]
}

{
    "subject": "tabbouleh",
    "predicate": "ReceivesAction",
    "object": "baked",
    "tail_phrase": "be baked"
    "score": 0.17,
    "strict_siblings": 
        [
            {
                "food": ["loaf", "samosa", "flatbread"]
            }, 
            
            {
                "side dish": ["casserole", "pasta"]
            }
        ]
}
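A minimal Python loading sketch, assuming the download unpacks to a JSON Lines file with one negation record per line; the filename is a placeholder, not the actual one:

import json
from collections import defaultdict

# Group negations by concept and report the highest-scoring one per concept.
negations = defaultdict(list)
with open("strict_rank_negations.jsonl", encoding="utf-8") as f:  # placeholder filename
    for line in f:
        record = json.loads(line)
        negations[record["subject"]].append(record)

for concept, records in negations.items():
    records.sort(key=lambda r: r["score"], reverse=True)
    top = records[0]
    print(f'{concept}: NOT "{top["tail_phrase"]}" (score {top["score"]})')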

Download [part1, part2] 13.6 million negations about 8k everyday concepts (complete final set of negations).
Details: concepts source, methodology in relaxed-rank mode.

Samples:


{
    "subject": "gorilla",
    "predicate": "HasA",
    "object": "tail",
    "tail_phrase": "have tail",
    "score": 0.15,
    "strict_siblings": 
        [
            "monkey",
            "lemur"
        ], 
    "relaxed_siblings": 
        [
            {
                "subject": "tiger",
                "predicate": "HasA",
                "object": "long tail"
            },
            
            {
                "subject": "baboon",
                "predicate": "HasA",
                "object": "long tail"
            },
            
            {
                "subject": "cheetah",
                "predicate": "HasA",
                "object": "long muscular tail"
            }
        ],
    "siblings": 
        [
            {
                "wild animal": ["baboon", "monkey", "lemur", "cheetah", "tiger"]
            }
        ]
}

{
    "subject": "tabbouleh",
    "predicate": "ReceivesAction",
    "object": "baked",
    "tail_phrase": "be baked",
    "score": 0.7,
    "strict_siblings":
        [
            "flatbread",
            "samosa",
            "pasta",
            "casserole",
            "loaf",
            "enchilada"
        ],
    "relaxed_siblings":
        [
            {
                "subject": "eggplant",
                "predicate": "ReceivesAction",
                "object": "baked in the oven"
            },
            
            {
                "subject": "chutney",
                "predicate": "ReceivesAction",
                "object": "cooked"
            },
            
            {
                "subject": "polenta",
                "predicate": "ReceivesAction",
                "object": "cooked"
            },
            
            {
                "subject": "kohlrabi",
                "predicate": "ReceivesAction",
                "object": "cooked"
            },
            
            {
                "subject": "couscous",
                "predicate": "ReceivesAction",
                "object": "cooked"
            }
        ], 
    "siblings": 
        [   
            {
                "side dish": ["loaf","pasta", "casserole", "couscous", "enchilada", "eggplant", "polenta", "kohlrabi", "chutney", "flatbread", "samosa"]
            }
        ]
}
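In the relaxed-rank data, each record additionally lists relaxed_siblings: positive statements from comparable concepts that match the negated statement only approximately (e.g., "long tail" vs. "tail"). A short sketch of turning one parsed record, as shown above, into human-readable evidence:

record = {
    "subject": "gorilla",
    "predicate": "HasA",
    "object": "tail",
    "tail_phrase": "have tail",
    "score": 0.15,
    "relaxed_siblings": [
        {"subject": "tiger", "predicate": "HasA", "object": "long tail"},
        {"subject": "baboon", "predicate": "HasA", "object": "long tail"},
    ],
}

# Print the negation together with the comparable-concept statements behind it.
print(f'{record["subject"]}: NOT "{record["tail_phrase"]}" (score {record["score"]})')
for ev in record["relaxed_siblings"]:
    print(f'  but {ev["subject"]} does: {ev["predicate"]}({ev["object"]})')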

Data produced by external methods (refer to the CIKM'22 paper for details):

NegatER-theta, NegatER-nabla (part1), NegatER-nabla (part2), Quasimodo-neg, GPT-3-neg.