Towards Knowledgeable Foundation Models

Workshop @ ACL 2026

San Diego, California, United States


Knowledge has long been an important prerequisite for a variety of AI applications, and is typically sourced from either structured knowledge sources such as knowledge bases and dictionaries, or unstructured sources such as Wikipedia documents.

More recently, researchers have discovered that language models already acquire a significant amount of knowledge through pre-training: LLMs can be prompted to generate commonsense knowledge and factual context for question answering. While these results are encouraging, several questions linger:

  • Where does this knowledge come from?
  • How much do language models know?
  • Is this knowledge reliable?
  • If some knowledge is wrong, can we fix it?

This workshop examines the lifecycle of knowledge within language models:

  • the emergence of knowledge through language model pre-training;
  • the injection of external knowledge;
  • the updating and modification of knowledge;
  • the probing and generation of knowledge.

This is the fourth edition of the Knowledgeable Foundation Models workshop, following KnowFM@ACL2025, KnowFM@AAAI2025, and KnowLM@ACL2024.

Call for Papers

Knowledge has been an important prerequisite for various NLP applications and is typically derived from either structured knowledge sources such as knowledge bases and dictionaries or unstructured knowledge sources such as Wikipedia documents and news articles.

It is known that language models already acquire a significant amount of knowledge through pre-training: LLMs can generate commonsense and factual knowledge when prompted to do so. However, beyond the surface, many questions linger:

  • Where does the knowledge come from?
  • How do we quantify the amount of knowledge?
  • Is the knowledge reliable (and do LMs themselves know)?
  • How can we augment LMs with domain-specific knowledge?
  • How can we revise knowledge without hurting the reasoning abilities of LMs?
  • How can we leverage knowledge to assist the self-correction of LMs?

In this workshop, we aim to bring together researchers who focus on different stages of the knowledge lifecycle and on different kinds of knowledge (structured knowledge, unstructured knowledge, and knowledge acquired from LMs themselves) to discuss the role of knowledge in the era of large language models.

Submission Topics

We welcome submissions on all topics related to knowledgeable LMs, including:

  • Analysis of knowledge within LMs: how much they know and where that knowledge comes from.
  • Enhancing LMs with existing knowledge sources (e.g., knowledge graphs, domain-specific databases, manuals, and rules), either during training or at inference time.
  • Analyzing and improving retrieval-augmented generation (RAG) systems.
  • Updating and editing knowledge in LMs.
  • Knowledge extraction and generation using LMs.
  • Evaluation of knowledge utilization (faithfulness, truthfulness) by LMs.
  • Identification and mitigation of LM hallucinations, and factual error correction.

Paper Awards

We will announce a Best Paper Award and an Outstanding Paper Award at the workshop.

Submission Instructions

We welcome two types of papers: regular workshop papers and non-archival submissions. Only regular workshop papers will be included in the workshop proceedings. The review process will be double-blind. All submissions should be in PDF format, follow the ACL template (up to 8 pages of main text), and be made through the OpenReview submission portal (https://openreview.net/group?id=aclweb.org/ACL/2026/Workshop/KnowFM).

Important Dates

All deadlines are 11:59pm UTC-12 ("Anywhere on Earth").

  • Submission Deadline: April 1, 2026
  • Decision Notifications: April 20, 2026
  • Camera-Ready Deadline: May 1, 2026
  • Workshop Date: July 3, 2026

Tentative Speakers

  • Yoav Artzi, Cornell University
  • Sewon Min, UC Berkeley
  • Eunsol Choi, UT Austin
  • Mohit Iyyer, University of Maryland
  • Danqi Chen, Princeton University
  • Yulia Tsvetkov, University of Washington
  • Luna Dong, Meta

Organizers

Organizing Committee

  • Canyu Chen, Northwestern University
  • Yuji Zhang, University of Illinois Urbana-Champaign
  • Zoey Sha Li, Amazon
  • Zihan Wang, Northwestern University
  • Qineng Wang, Northwestern University
  • Priyanka Kargupta, University of Illinois Urbana-Champaign
  • Sara Vera Marjanović, University of Copenhagen
  • Jeff Z. Pan, University of Edinburgh
  • Manling Li, Northwestern University

Advising Committee

  • Heng Ji, University of Illinois Urbana-Champaign
  • Mohit Bansal, University of North Carolina at Chapel Hill
  • Isabelle Augenstein, University of Copenhagen
  • Jiawei Han, University of Illinois Urbana-Champaign

Contact

Please email know-fm-acl26@googlegroups.com if you have any questions.