Towards Knowledgeable Language Models

@ ACL 2024 Workshop

August 12–17, 2024 · Bangkok, Thailand & remote (hybrid)

Knowledge has been an important prerequisite for a variety of NLP applications, and is typically sourced from either structured knowledge sources such as knowledge bases and dictionaries, or unstructured knowledge sources such as Wikipedia documents.

More recently, researchers have discovered that language models already acquire a significant amount of knowledge through pre-training: LLMs can be used to generate commonsense knowledge and factual context for question answering. While the results are encouraging, there are still lingering questions:

  • Where does this knowledge come from?
  • How much do language models know?
  • Is this knowledge reliable?
  • If some knowledge is wrong, can we fix it?

This workshop examines the lifecycle of knowledge within language models:

  1. the emergence of knowledge through language model pre-training;
  2. the injection of external knowledge;
  3. the updating and modification of knowledge;
  4. the probing and generation of knowledge.

Call for Papers

Knowledge has been an important prerequisite for various NLP applications and is typically derived from either structured knowledge sources such as knowledge bases and dictionaries or unstructured knowledge sources such as Wikipedia documents and news articles.

It is known that language models already acquire a significant amount of knowledge through pre-training: when prompted to do so, LLMs can generate both commonsense and factual knowledge. However, beyond the surface, many questions linger:

  • Where does the knowledge come from?
  • How do we quantify the amount of knowledge?
  • Is the knowledge reliable (and do LMs themselves know)?
  • How can we augment LMs with domain-specific knowledge?
  • How can we revise knowledge without hurting the reasoning abilities of LMs?
  • How can we leverage knowledge to assist the self-correction of LMs?

In this workshop, we aim to bring together researchers who focus on different stages of the knowledge lifecycle and on different kinds of knowledge (structured knowledge, unstructured knowledge, and knowledge acquired from LMs themselves) to discuss the role of knowledge in the era of large language models.

Submission Topics

We welcome long (8-page) and short (4-page) paper submissions on all topics related to knowledgeable LMs, including:

  • Analysis of knowledge within LMs: how much they know and where that knowledge comes from.
  • Enhancing LMs with existing knowledge sources (knowledge graphs, domain-specific databases, manuals, rules, etc.), either during training or at inference time.
  • Analyzing and improving RAG (retrieval-augmented generation) systems.
  • Updating and editing knowledge in LMs.
  • Knowledge extraction and generation using LMs.
  • Evaluation of knowledge utilization (faithfulness, truthfulness) by LMs.
  • Identification and mitigation of LM hallucinations; factual error correction.

We will also announce a Best Paper Award, sponsored by Amazon, at the workshop.

Submission Instructions

We welcome two types of papers: regular workshop papers and non-archival submissions. Only regular workshop papers will be included in the workshop proceedings. All submissions should be in PDF format, follow the ACL template, and be made through the OpenReview submission portal (https://openreview.net/group?id=aclweb.org/ACL/2024/Workshop/KnowledgeLM).

Important Dates (Tentative)

All deadlines are 11:59 pm UTC-12h (“Anywhere on Earth”).

Submission Deadline: May 24, 2024
Decision Notifications: June 22, 2024
Camera-Ready Deadline: July 6, 2024
Workshop Date: August 16, 2024

Speakers

Yang Liu (Amazon)
Ed Hovy (CMU)

Schedule

Time Program
9:00-9:10 Opening remarks
9:10-10:40 Three keynote speeches (30 min each)
10:40-11:30 Panel discussion
11:30-12:30 Poster session
12:30-13:30 Student mentoring lunch session (pair senior researchers with junior researchers)
13:30-15:00 Three keynote speeches (30 min each)
15:00-15:50 Panel discussion
15:50-16:50 Oral paper session (12 min talk + 3 min QA)
16:50-17:20 Challenge track spotlight session (6 min talk)
17:20-17:30 Closing remarks

Organization

Organizing Committee

Zoey Sha Li (University of Illinois Urbana-Champaign)
Manling Li (Northwestern University)
Michael JQ Zhang (University of Texas at Austin)
Eunsol Choi (University of Texas at Austin)
Mor Geva (Google DeepMind, Tel Aviv University)
Peter Hase (University of North Carolina at Chapel Hill)

Advising Committee

Heng Ji (University of Illinois Urbana-Champaign, Amazon Scholar)
Mohit Bansal (University of North Carolina at Chapel Hill)
Preslav Nakov (MBZUAI)
Meng Jiang (University of Notre Dame)
Jiawei Han (University of Illinois Urbana-Champaign)

Contact

Please email knowledgeable-lm@googlegroups.com if you have any questions.

Support

Amazon