Workshop on Common Sense Knowledge Graphs (CSKGs)

February 8

Held virtually in conjunction with AAAI’21

Live workshop recording:

Commonsense knowledge graphs (CSKGs) are sources of background knowledge that are expected to benefit downstream tasks such as question answering, robot manipulation, and planning. The knowledge covered in CSKGs varies greatly, spanning procedural, conceptual, and syntactic knowledge, among others. CSKGs also come in a wider variety of forms than traditional knowledge graphs, ranging from (semi-)structured knowledge graphs, such as ConceptNet, ATOMIC, and FrameNet, to the recent idea of using language models as knowledge graphs. As a consequence, traditional methods for integrating and using knowledge graphs may need to be extended when dealing with CSKGs. Understanding how best to integrate and represent CSKGs, leverage them in downstream tasks, and tailor their knowledge to the particularities of each task remains an open challenge. The workshop on CSKGs addresses these challenges by focusing on the creation of commonsense knowledge graphs and their usage in downstream commonsense reasoning tasks.
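To make the variety of forms concrete, the following minimal sketch (illustrative only, not part of the workshop material) shows how ConceptNet-style concept assertions and ATOMIC-style event inferences could be merged into one multi-relational graph using the networkx library. The specific triples are toy examples chosen for illustration.

```python
# Illustrative sketch: merging ConceptNet-style and ATOMIC-style assertions
# into a single multi-relational graph. The triples below are toy examples.
import networkx as nx

cskg = nx.MultiDiGraph()

# ConceptNet-style (concept, relation, concept) triples.
conceptnet_triples = [
    ("bake a cake", "HasSubevent", "preheat the oven"),
    ("oven", "UsedFor", "baking"),
]

# ATOMIC-style (event, dimension, inference) triples.
atomic_triples = [
    ("PersonX bakes a cake", "xIntent", "to celebrate a birthday"),
    ("PersonX bakes a cake", "xNeed", "to buy ingredients"),
]

for head, relation, tail in conceptnet_triples:
    cskg.add_edge(head, tail, key=relation, source="ConceptNet")

for head, relation, tail in atomic_triples:
    cskg.add_edge(head, tail, key=relation, source="ATOMIC")

# Inspect everything the merged graph asserts about one node.
for _, tail, relation, data in cskg.out_edges("PersonX bakes a cake", keys=True, data=True):
    print(f"{relation} -> {tail}  (from {data['source']})")
```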

Confirmed Keynote Speakers

  • Yejin Choi
    University of Washington & AI2
  • Joshua Tenenbaum
    MIT

Confirmed Panelists

  • David Ferrucci
    Elemental Cognition
  • Shih-Fu Chang
    Columbia University
  • Łukasz Kaiser
    Google Brain & CNRS
  • Tony Veale
    University College Dublin

Program

All times are given in Pacific Standard Time (PST).

08:30 - 08:45   Welcome (Filip Ilievski)

08:45 - 09:45   Keynote by Yejin Choi (chair: Pedro Szekely) [slides]

09:45 - 10:35   Paper session 1: Collecting and representing commonsense knowledge (chair: Alessandro Oltramari)
  • Anurag Acharya, Kartik Talamadupula and Mark Finlayson. Towards An Atlas of Cultural Commonsense for Machine Reasoning (long)
  • Zhicheng Liang and Deborah L. McGuinness. Commonsense Knowledge Mining from Term Definitions (short)
  • Boulos El Asmar, Syrine Chelly and Michael Färber. AWARE: An Ontology for Situational Awareness of Autonomous Vehicles in Manufacturing (long)

10:35 - 10:55   Break

10:55 - 11:45   Paper session 2: KGs for natural language tasks (chair: Deborah McGuinness)
  • Yikang Li, Pulkit Goel, Varsha Kuppur Rajendra, Har Simrat Singh, Jonathan Francis, Kaixin Ma, Eric Nyberg and Alessandro Oltramari. Lexically-constrained Text Generation through Commonsense Knowledge Extraction and Injection (long)
  • Yasaman Razeghi, Robert Logan and Sameer Singh. Deriving Behavioral Tests from Common Sense Knowledge Graphs (short)
  • Litton Jose Kurisinkel and Nancy F Chen. Graph To Coherent Text: Passage Generation from Knowledge Graphs by Exploiting Edge Representations in Sentential Contexts (long)

11:45 - 12:45   Panel with David Ferrucci, Shih-Fu Chang, Łukasz Kaiser, and Tony Veale: Are language models enough? (Moderator: Filip Ilievski)

12:45 - 13:05   Break

13:05 - 14:05   Keynote by Joshua Tenenbaum (chair: Deborah McGuinness) [slides]

14:05 - 14:55   Paper session 3: Common sense inference (chair: Pedro Szekely)
  • Alessio Sarullo and Tingting Mu. Zero-Shot Human-Object Interaction Recognition via Affordance Graphs (long)
  • Henrique Santos, Minor Gordon, Zhicheng Liang, Gretchen Forbush and Deborah L. McGuinness. Exploring and Analyzing Machine Commonsense Benchmarks (short)
  • Kwabena Nuamah, Alan Bundy and Yantao Jia. A Context Mechanism for an Inference-based Question Answering System (long)

14:55 - 15:30   Discussion and Wrap-up (led by Alessandro Oltramari)

15:30 - 16:30   End with optional virtual happy hour

Panel: Are language models enough?

(Deep) Language models are so popular these days that it is becoming hard to find a scientific publication in Natural Language Processing (NLP) that does not use one. This popularity is well deserved: BERT and its descendants ("BERTology"), GPT-3, T5, and others have been phenomenal at improving the state of the art across a wide range of NLP tasks, from semantic parsing to question answering and text generation. But are language models enough to understand meaning as broadly and deeply as humans do? Are the trivial errors that language models still make in abundance merely an indication that we need larger and better curated corpora, and proportionally higher-capacity models? Or are these errors clues that language models, no matter how many billions of words they are trained on, can only capture surface-level meaning, i.e., semantic features exhibited by the training data? We have invited top experts in the field to help us shed some light on these questions.

Accepted Papers

The accepted papers are listed in the paper sessions of the program above.

Topics

Topics of interest include, but are not limited to:

  • Creation/extraction of new CSKGs
  • Integration of existing CSKGs
  • Exploration of CSKGs
  • Impact of CSKGs on downstream tasks
  • Methods of including CSKG knowledge in downstream tasks
  • Probing for knowledge needs in downstream tasks
  • Evaluation data/metrics relevant for CSKGs
  • Identifying and/or filling gaps in CSKGs

Key Dates

November 15: Workshop submissions due (extended)
November 30: Notifications sent to authors
December 18: Camera-ready paper versions due
January 15: Release of final workshop schedule
February 8: Workshop

All deadlines are at 23:59, anywhere on Earth (AoE).

Submissions

We welcome submissions of long (max. 8 pages), short (max. 4 pages), and position (max. 4 pages) papers describing new, previously unpublished research in this field. The page limits include references. Appendices of any length may be added to the paper; however, reviewers are not obliged to inspect them in detail. Submissions must be formatted in the AAAI submission format. All submissions should be made electronically via EasyChair: https://easychair.org/conferences/?conf=cskgsaaai21.

Organizers

  • Filip Ilievski
    USC/ISI
  • Alessandro Oltramari
    Bosch Research and Technology Center (Pittsburgh)
  • Deborah McGuinness
    Rensselaer Polytechnic Institute
  • Pedro Szekely
    USC/ISI

Program Committee

  • Marjorie Friedman (USC Information Sciences Institute, Massachusetts, USA)
  • Aldo Gangemi (University of Bologna and ISTC, National Research Council, Italy)
  • Henry Lieberman (MIT, Massachusetts, USA)
  • Robert Logan (University of California Irvine, California, USA)
  • Roberto Navigli (Sapienza University of Rome, Italy)
  • Valentina Presutti (ISTC, National Research Council, Italy)
  • Simon Razniewski (Max Planck Institute for Informatics, Germany)
  • German Rigau (University of the Basque Country, Spain)
  • Daniel Schwabe (Pontifícia Universidade Católica (PUC), Rio de Janeiro, Brazil)
  • Niket Tandon (Allen Institute for AI, Washington, USA)
  • Piek Vossen (VU Amsterdam, The Netherlands)