MH2: Commonsense Knowledge Acquisition and Representation

AAAI'21 Tutorial
February 3, 2021

The tutorial will consist of four main components, each covered by one of the presenters, followed by a discussion session. We will start by introducing theories on the axiomatization of commonsense knowledge. Next, we will cover efforts to harmonize nodes and relations across heterogeneous commonsense sources, as well as the impact of such consolidation on downstream reasoning tasks. Thirdly, we will discuss how commonsense knowledge can be automatically extracted from text, and how it can be contextualized both quantitatively and qualitatively. Then, we will discuss how large-scale models, such as BERT, GPT-2, and T5, learn to implicitly represent an abundance of commonsense knowledge from reading the web, and how this knowledge can be extracted through carefully designed language prompting or through fine-tuning on knowledge graph tuples. We will conclude the tutorial with a discussion of the way forward, proposing to combine language models, knowledge graphs, and axiomatization in next-generation commonsense reasoning techniques. Prior knowledge expected from participants is minimal. Some knowledge of machine learning and language modeling will be helpful, but not compulsory: we will introduce the relevant machine learning concepts so that everyone has an opportunity to follow along.
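To make the prompting idea above concrete: knowledge graph tuples can be verbalized into natural-language statements, either as complete sentences for fine-tuning or as cloze-style prompts for probing a pretrained masked language model. The sketch below is illustrative only; the triples and relation templates are made-up examples in the spirit of ConceptNet-style resources and cloze-probing setups, not part of the tutorial materials:

```python
# Verbalize (head, relation, tail) triples into sentences or cloze prompts.
# The relation templates are hypothetical examples; real probing systems
# design and tune such templates carefully.
TEMPLATES = {
    "UsedFor": "A {head} is used for {tail}.",
    "AtLocation": "You are likely to find a {head} in a {tail}.",
    "CapableOf": "A {head} can {tail}.",
}

def verbalize(head, relation, tail=None):
    """Turn a triple into a sentence.

    With a tail, produce a full statement (e.g. for fine-tuning on
    knowledge graph tuples); without one, produce a cloze prompt with
    a [MASK] slot (e.g. for probing a pretrained masked LM)."""
    template = TEMPLATES[relation]
    return template.format(head=head, tail=tail if tail is not None else "[MASK]")

print(verbalize("hammer", "UsedFor", "driving nails"))
# -> A hammer is used for driving nails.
print(verbalize("hammer", "UsedFor"))
# -> A hammer is used for [MASK].
```

A language model's predictions for the masked slot can then be compared against the tails stored in the knowledge graph, which is the basic recipe behind prompt-based knowledge extraction.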

Tutorial Program

All times are tentative and given in Pacific Standard Time (PST).





08:30 - 08:40   Introduction to commonsense knowledge (Filip Ilievski)
08:40 - 09:05   Axiomatization of commonsense knowledge (Mayank Kejriwal)
09:05 - 09:45   Consolidating commonsense knowledge (Filip Ilievski)
09:45 - 10:00   Break
10:00 - 10:45   Extracting and contextualizing commonsense knowledge (Simon Razniewski)
10:45 - 11:30   Language models, QA, and evaluation challenges (Antoine Bosselut)
11:30 - 11:45   Way forward: KGs+LMs+axioms? (Filip Ilievski)

Background & Requirements

Commonsense reasoning has been recognized as essential for building more advanced 'general' AI systems with human-like capabilities and reasoning ability, even when facing uncertain, implicit, or potentially contradictory information. Recognizing its importance, researchers in several communities have increasingly engaged in researching and evaluating commonsense reasoning on tasks pertaining to question answering and abductive reasoning. Unlike 'pure' logical reasoning tasks, where the knowledge base and the inference axioms can be separated (at least in principle), knowledge is an integral aspect of commonsense reasoning. This knowledge may be acquired over large natural language (and even visual) corpora, as transformer-based models such as BERT (Devlin et al., 2018) and GPT-2 (Radford et al., 2019) have attempted to do, or through 'knowledge graphs' of concepts, relations, and events constructed using natural language processing and crowdsourcing techniques. Once acquired, the knowledge must also be represented appropriately to support human-like reasoning and question answering. While language models favor continuous vector representations, knowledge graphs represent knowledge as discrete, symbolic structures. In this tutorial, we present a comprehensive overview of commonsense knowledge acquisition and representation techniques, based on both classic research and modern advances in the Natural Language Processing and Semantic Web communities.
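To make the contrast between the two representations concrete: a knowledge graph stores commonsense facts as discrete (head, relation, tail) triples that can be looked up exactly, whereas a language model encodes such knowledge implicitly in continuous parameter vectors. Below is a minimal sketch of the discrete side; the class and the example facts are illustrative and not drawn from any specific resource:

```python
from collections import defaultdict

class CommonsenseKG:
    """A tiny in-memory knowledge graph of (head, relation, tail) triples."""

    def __init__(self):
        # Index triples by (head, relation) for exact lookup.
        self._by_head_rel = defaultdict(set)

    def add(self, head, relation, tail):
        self._by_head_rel[(head, relation)].add(tail)

    def query(self, head, relation):
        """Exact, symbolic lookup -- no similarity or inference involved."""
        return sorted(self._by_head_rel[(head, relation)])

kg = CommonsenseKG()
kg.add("dog", "IsA", "animal")
kg.add("dog", "CapableOf", "bark")
kg.add("dog", "AtLocation", "kennel")

print(kg.query("dog", "CapableOf"))  # ['bark']
print(kg.query("dog", "IsA"))        # ['animal']
```

The trade-off this illustrates: the symbolic store answers only what was explicitly asserted, while a language model can generalize to unseen queries at the cost of precision and verifiability, which is one motivation for combining the two.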

Prior knowledge expected from participants is minimal. Some knowledge of machine learning, including basic concepts such as training, testing, validation, and feature engineering, will be helpful but is not an absolute prerequisite, as we will not go into advanced machine learning mathematics or optimization. Additionally, where possible, we will introduce basic machine learning concepts so that everyone has an opportunity to follow along. Participants are not expected to have any knowledge of answering natural language commonsense questions, nor of state-of-the-art knowledge sources or axiomatization theories.