Common Sense Knowledge Graphs (CSKGs)


Commonsense reasoning is an important aspect of building robust AI systems and is receiving significant attention in the natural language understanding, computer vision, and knowledge graphs communities. At present, a number of valuable commonsense knowledge sources exist, with different foci, strengths, and weaknesses. Our tutorial will survey the most important commonsense knowledge resources and introduce a new commonsense knowledge graph (CSKG) that integrates several existing resources. The tutorial will also introduce several tools for working with CSKG, including query mechanisms, knowledge graph embeddings, and existing methods that answer commonsense questions with knowledge graphs.
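To give a concrete feel for the query mechanisms covered in the tutorial, here is a minimal sketch of indexing CSKG-style edges for neighborhood queries. CSKG is distributed as tab-separated edge files with (at least) node1 / relation / node2 columns; the toy rows, identifiers, and the simplified three-column schema below are illustrative only, not the released graph or its full schema.

```python
import csv
from collections import defaultdict
from io import StringIO

# Toy excerpt in a simplified CSKG-like edge format. The real files carry
# extra columns (labels, provenance, sentences); these rows are made up.
TSV = """node1\trelation\tnode2
/c/en/dog\t/r/IsA\t/c/en/animal
/c/en/dog\t/r/CapableOf\t/c/en/bark
/c/en/cat\t/r/IsA\t/c/en/animal
"""

def load_edges(text):
    """Parse TSV edges into an adjacency index keyed by the subject node."""
    index = defaultdict(list)
    for row in csv.DictReader(StringIO(text), delimiter="\t"):
        index[row["node1"]].append((row["relation"], row["node2"]))
    return index

edges = load_edges(TSV)
# All outgoing edges of /c/en/dog:
print(edges["/c/en/dog"])
```

In practice such lookups are done with dedicated tooling over the full graph; the point here is only the shape of the data.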

[Tutorial recording]

Tutorial Program

Our tutorial will provide a comprehensive overview of both top-down and bottom-up commonsense knowledge graphs. We will introduce each graph in turn and contrast their design principles. Next, we will discuss the benefits and challenges of their integration and consolidation into a single CSKG, followed by our approach to performing this consolidation. Finally, we will discuss how to compute embeddings and ground text to such CSKGs, as well as how CSKGs are used to reason over downstream question answering tasks. The tutorial will include demos based on publicly available resources and Jupyter Notebooks.
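As a preview of the grounding step discussed in Part III, the sketch below maps question text to graph nodes by greedy longest-match lookup against node labels. The label-to-node index and the node identifiers are hypothetical stand-ins; a real pipeline would use lemmatization and the full CSKG node file rather than exact lowercase matching.

```python
import re

# Hypothetical label index: a few CSKG-style nodes keyed by English labels.
LABEL_TO_NODE = {
    "dog": "/c/en/dog",
    "bark": "/c/en/bark",
    "tail": "/c/en/tail",
}

def ground(text, index, max_len=3):
    """Greedy longest-match grounding of a question to KG nodes."""
    tokens = re.findall(r"[a-z]+", text.lower())
    matches = []
    i = 0
    while i < len(tokens):
        # Try the longest phrase starting at position i first.
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            phrase = " ".join(tokens[i:i + n])
            if phrase in index:
                matches.append((phrase, index[phrase]))
                i += n
                break
        else:
            i += 1  # no phrase matched; skip this token
    return matches

result = ground("Why does a dog bark?", LABEL_TO_NODE)
print(result)
```

Longest-match-first lookup is one simple design choice; it lets multi-word labels win over their single-word parts when both appear in the index.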


All times are given in Pacific Standard Time (PST).





08:00 PST

1h 50min

Part I - Review of CSKGs

[.pptx]   [.pdf]

15 min

Introduction to commonsense knowledge (slides) - Pedro

25 min

Review of top-down commonsense knowledge graphs (slides) - Mayank

70 min

Review of bottom-up commonsense knowledge graphs (slides+demo) - Mayank, Filip, Pedro

10 min

Break

10:00 PST

45 min

Part II - Integration and analysis

[.pptx]   [.pdf]

35 min

Consolidating commonsense graphs (slides) - Filip

10 min

Consolidating commonsense graphs (demo) - Pedro

10 min

Break

10:55 PST

1h 05min

Part III - Downstream use of CSKGs

[.pptx]   [.pdf]

35 min

Answering questions with CSKGs (slides) - Filip

15 min

CSKG embeddings and grounding (demo) - Filip

15 min

Wrap-up (slides) - Mayank

Learning Outcomes:

  1. Familiarization with the state-of-the-art knowledge sources that provide commonsense knowledge
  2. Understanding how these sources can be integrated into a single commonsense knowledge graph
  3. Code for analyzing a consolidated CSKG, performing intrinsic operations (e.g., computing embeddings), and applying the CSKG to standard commonsense tasks posed in natural language

Presentation Style

The presentation style will be informal and very hands-on. We will avoid representation specifics and terminology particular to the individual graphs to the extent possible (i.e., without losing rigor or over-simplifying). Our slides will focus on visual intuition, use concrete examples, and present lessons learned from our latest implementations of algorithms and commonsense reasoning frameworks; they will be accompanied by demos and hands-on activities that participants can do without extensive platform-dependent setup. We will permit questions and interaction throughout the tutorial. All three of us will be present for most of the tutorial, with each of us presenting our own sections.

Background & Requirements

Capturing, representing, and leveraging commonsense knowledge has been a paramount goal for AI since its early days, cf. (McCarthy, 1960). In light of modern large (commonsense) knowledge graphs and various neural advancements, the recently introduced DARPA Machine Common Sense program represents a new effort to understand commonsense knowledge through question-answering evaluation benchmarks. Intuitively, graphs of (commonsense) knowledge are essential for such tasks, as they inject background knowledge that humans possess and apply but that machines cannot access or distill directly from communication.

Our team has been working on several aspects of commonsense knowledge found in knowledge graphs. First, we have been integrating a number of knowledge graphs into a single commonsense knowledge graph, including ConceptNet, WordNet, Visual Genome, FrameNet, ATOMIC, WebChild, Wikidata, and Cyc. Second, we have been building software to perform intrinsic operations on the consolidated CSKG, such as generating embeddings and performing knowledge graph completion. Third, we have built a framework that allows us to integrate (parts of) our CSKG into a reasoning system that answers questions phrased in natural language. These questions come from several commonsense evaluation datasets focused on social, physical, visual, and situational reasoning.
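To illustrate the "generating embeddings" step, here is a minimal pure-Python sketch of the translational (TransE-style) idea: learn vectors so that head + relation ≈ tail for observed edges. The three-node toy graph is made up, and real training uses dedicated libraries, negative sampling, and a margin loss over millions of edges; this sketch only minimizes the energy of the positive triples.

```python
import random

random.seed(0)
DIM = 8  # tiny embedding dimension for the toy example

def vec():
    return [random.uniform(-0.5, 0.5) for _ in range(DIM)]

def score(h, r, t):
    # TransE energy: squared distance ||h + r - t||^2 (lower = more plausible).
    return sum((h[i] + r[i] - t[i]) ** 2 for i in range(DIM))

# Hypothetical three-node toy graph; real CSKG training uses millions of edges.
ent = {e: vec() for e in ("dog", "cat", "animal")}
rel = {"IsA": vec()}
triples = [("dog", "IsA", "animal"), ("cat", "IsA", "animal")]

lr = 0.05
for _ in range(200):  # plain gradient steps on the positive triples only
    for h, r, t in triples:
        hv, rv, tv = ent[h], rel[r], ent[t]
        for i in range(DIM):
            g = 2 * (hv[i] + rv[i] - tv[i])  # d(energy)/d(hv[i])
            hv[i] -= lr * g
            rv[i] -= lr * g
            tv[i] += lr * g

# An observed triple should now score lower than a corrupted one.
pos = score(ent["dog"], rel["IsA"], ent["animal"])
neg = score(ent["dog"], rel["IsA"], ent["cat"])
print(pos, neg)
```

Knowledge graph completion then amounts to ranking candidate tails by this score; the demos in the tutorial cover the full-scale versions of both operations.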

It is important for the semantic web community to keep pace with this new ‘wave’ of commonsense knowledge representation and reasoning for downstream applications. The questions we seek to address through our presentation and activities are: What is a commonsense knowledge graph? What kind of knowledge is captured by existing CSKGs? What are their major strengths and weaknesses? How can they be integrated into a graph that is more than ‘a sum of its parts’? How can this graph be refined or enriched with more semantics or missing information? How can it be applied to downstream applications in natural language understanding? Our tutorial will be very practical and will answer these questions in a focused, foundational manner that all Semantic Web researchers will be able to follow easily.

Prior knowledge expected from participants (beyond fairly basic Python 3 skills and familiarity with Semantic Web concepts such as RDF) will be minimal. Some knowledge of machine learning, including basic concepts like training, testing, validation, and feature engineering, will be helpful but is not an absolute prerequisite, as we will not go into advanced machine learning math or optimization. Additionally, where possible, we will introduce basic machine learning concepts so that everyone has an opportunity to follow along. Participants are not expected to have any knowledge of answering natural language commonsense questions.

We will be using our own computers for presenting demos and PowerPoint slides and only require equipment to facilitate such projection for an extended period of time (e.g., projector, table, power outlet). We (and the participants) will require an internet/Wi-Fi connection to access the tutorial material. There are no audio elements to our presentation. All demos and hands-on activities will be doable on a reasonable laptop by interested participants. We will also bring extra USB storage devices with copies of the code, programs, and slides in case some participants have not downloaded the material prior to the tutorial.

Expected Coverage

Tools and data: