Towards Knowledgeable Foundation Models

@ ACL 2025 Workshop

Aug 1, 2025 in Vienna, Austria

Knowledge has been an important prerequisite for a variety of AI applications, and is typically sourced from either structured knowledge sources such as knowledge bases and dictionaries or unstructured knowledge sources such as Wikipedia documents.

More recently, researchers have discovered that language models already possess a significant amount of knowledge through pre-training: LLMs can be used to generate commonsense and factual knowledge as context for question answering. While the results are encouraging, there are still lingering questions:

  • Where does this knowledge come from?
  • How much do language models know?
  • Is this knowledge reliable?
  • If some knowledge is wrong, can we fix it?

This workshop examines the lifecycle of knowledge within language models:

  (1) the emergence of knowledge through language model pre-training;
  (2) the injection of external knowledge;
  (3) the updating and modification of knowledge;
  (4) the probing and generation of knowledge.

This is the third edition of the Knowledgeable Foundation Models workshop. Previous editions were held as KnowFM @ AAAI 2025 and KnowLM @ ACL 2024.

Call for Papers

Knowledge has been an important prerequisite for various NLP applications and is typically derived from either structured knowledge sources such as knowledge bases and dictionaries or unstructured knowledge sources such as Wikipedia documents and news articles.

It is known that language models already possess a significant amount of knowledge through pre-training: LLMs can generate commonsense and factual knowledge when prompted to do so. However, beyond the surface, many questions linger: Where does the knowledge come from? How do we quantify the amount of knowledge? Is the knowledge reliable (and do LMs themselves know)? How can we augment LMs with domain-specific knowledge? How can we revise knowledge without hurting the reasoning abilities of LMs? And how can we leverage knowledge to assist the self-correction of LMs?

In this workshop, we aim to bring together researchers who focus on different stages of the knowledge lifecycle and on different aspects of knowledge (structured knowledge, unstructured knowledge, and knowledge acquired from LMs themselves) to discuss the role of knowledge in the era of large language models.

Submission Topics

We welcome submissions on all topics related to knowledgeable LMs, including:

  • Analysis of knowledge within LMs: how much they know and where that knowledge comes from.
  • Enhancing LMs with existing knowledge sources (knowledge graphs, domain-specific databases, manuals, rules, etc.), either during training or inference.
  • Analyzing and improving retrieval-augmented generation (RAG) systems.
  • Updating and editing knowledge in LMs.
  • Knowledge extraction and generation using LMs.
  • Evaluation of knowledge utilization (faithfulness, truthfulness) by LMs.
  • Identification and mitigation of LM hallucinations, and factual error correction.

We will also announce a Best Paper Award at our workshop sponsored by Amazon.

Submission Instructions

We welcome two types of papers: regular workshop papers and non-archival submissions. Only regular workshop papers will be included in the workshop proceedings. The review process will be double-blind. All submissions should be in PDF format, follow the ACL template, and be made through the OpenReview submission portal (https://openreview.net/group?id=aclweb.org/ACL/2025/Workshop/KnowFM).

Important Dates

All deadlines are 11:59 pm UTC-12h (“Anywhere on Earth”).

Submission Deadline: May 10, 2025
Decision Notifications: Jun 10, 2025
Camera-Ready Deadline: Jun 18, 2025
Workshop Date: Aug 1, 2025

Speakers

Schedule

Time Program
09:00-09:05 Opening Remarks
09:05-09:50 Keynote Speech
09:50-10:35 Keynote Speech
10:35-11:00 Coffee Break
11:00-11:45 Keynote Speech
11:45-12:00 Oral Presentation
12:00-12:15 Oral Presentation
12:15-12:30 Oral Presentation
12:30-12:35 Best Paper Announcement
12:35-14:00 Lunch Break (Student Mentoring Session + Poster Session)
14:00-14:45 Keynote Speech
14:45-15:30 Keynote Speech
15:30-16:00 Coffee Break
16:00-16:45 Keynote Speech
16:45-16:50 Lightning Talk
16:50-16:55 Lightning Talk
16:55-17:00 Lightning Talk
17:00-17:05 Lightning Talk
17:05-17:10 Lightning Talk
17:10-17:30 QA & Closing Remarks

Accepted Papers

Coming soon!

Organization

Organizing Committee

Manling Li

Northwestern University

Zoey Sha Li

Amazon

Mor Geva

Google DeepMind, Tel Aviv University

Xiaozhi Wang

Tsinghua, UIUC

Chi Han

University of Illinois Urbana-Champaign

Shangbin Feng

University of Washington

Silin Gao

EPFL

Advising Committee

Heng Ji

University of Illinois Urbana-Champaign, Amazon Scholar

Isabelle Augenstein

University of Copenhagen

Mohit Bansal

University of North Carolina at Chapel Hill

Contact

Please email know-fm-acl25@googlegroups.com if you have any questions.

Support

Amazon