
Hi, I'm Jiateng Liu.

I'm Jiateng Liu (刘嘉腾), a second-year MS student at the University of Illinois Urbana-Champaign (UIUC) under the guidance of Prof. Heng Ji. Previously, I earned my bachelor's degree in Computer Science from Zhejiang University.

I'm interested in the Science of NLP while keeping an eye on its Applications.

Science: I am deeply committed to the rigorous study of large models. I strive to enhance their interpretability and effectiveness. As we move into the next decade, I am excited to contribute to the evolution of AI—developing systems that not only perform complex tasks but are also interpretable and genuinely beneficial to humanity.

Applications: I believe that AI must translate theoretical prowess into substantial improvements in daily life. I keep an eye on the applications of NLP in science and look forward to the next 'ChatGPT' moment.

Research

My research interests are organized into three key areas:


The Physics and Interpretability of Language Models

I am intrigued by the underlying "physics" of Large Language Models (LLMs), focusing on how they absorb knowledge, process information, and make predictions. A major part of my research is improving the efficiency and accuracy of updating pretrained LLMs, ensuring that their knowledge remains robust, consistent, and up-to-date while minimizing costs and time.

Additionally, I am deeply interested in the interpretability of LLMs—understanding how they represent and manipulate information internally to provide more transparent and trustworthy AI systems.


Multi-Modal Representation Learning and Multimedia Foundation Models

My work in this area centers on designing new paradigms for multi-modal interaction and deriving empirical scaling laws for multi-modal foundation models. I focus particularly on video understanding and generation, aiming to seamlessly integrate language, visual, and temporal modalities.

A key aspect of my research is exploring how multimodal interactions are learned, including the mechanisms by which information flows and aligns across modalities to create cohesive representations. I also investigate protocols for efficient and complete multimodal interactions, ensuring that each modality contributes optimally to the task at hand while minimizing redundancy and maximizing interpretability.

My goal is to develop systems capable of effectively analyzing and generating content across diverse modalities, with applications in tasks such as video captioning, video-based reasoning, and video synthesis, thereby advancing the capabilities of multi-modal AI.


NLP and Multi-Disciplinary Science for Social Good

I use NLP to tackle challenges in social and scientific domains. For example, I analyze social media data to study the spread of misinformation and its societal impacts, helping to develop tools that counteract disinformation.

Additionally, I explore how NLP can contribute to scientific advancements, such as facilitating drug design and interpreting neural signals. These interdisciplinary applications demonstrate the transformative potential of NLP in both improving lives and advancing science.

Publications

For the full list of publications, please visit my Google Scholar page.

EVEDIT: Event-based Knowledge Editing for Deterministic Knowledge Propagation

Jiateng Liu*, Pengfei Yu*, Yuji Zhang, Sha Li, Zixuan Zhang, Ruhi Sarikaya, Kevin Small, Heng Ji
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP-2024 Main Conference)

A Language First Approach for Procedure Planning

Jiateng Liu*, Sha Li*, Zhenhailong Wang, Manling Li, Heng Ji
Findings of the Association for Computational Linguistics: ACL 2023

PropaInsight: Toward Deeper Understanding of Propaganda in Terms of Techniques, Appeals, and Intent

Jiateng Liu*, Lin Ai*, Zizhou Liu, Payam Karisani, Zheng Hui, May Fung, Preslav Nakov, Julia Hirschberg, Heng Ji
Proceedings of the 31st International Conference on Computational Linguistics (COLING 2025)

If LLM Is the Wizard, Then Code Is the Wand: A Survey on How Code Empowers Large Language Models to Serve as Intelligent Agents

Ke Yang*, Jiateng Liu*, John Wu, Chaoqi Yang, Yi R. Fung, Sha Li, Zixuan Huang, Xu Cao, Xingyao Wang, Yiquan Wang, Heng Ji, Chengxiang Zhai
ICLR 2024 Workshop (in submission to ACM Computing Surveys)

MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback

Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji
Proceedings of the 12th International Conference on Learning Representations (ICLR 2024)

CurveCloudNet: Processing Point Clouds with 1D Structure

Colton Stearns, Jiateng Liu, Davis Rempe, Despoina Paschalidou, Jeong Joon Park, Sebastien Mascha, Leonidas J. Guibas
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2024)

Knowledge Overshadowing Causes Amalgamated Hallucination in Large Language Models

Yuji Zhang, Sha Li, Jiateng Liu, Pengfei Yu, Yi R Fung, Jing Li, Manling Li, Heng Ji
arXiv preprint

Research Experiences

Stanford University | Advisors: Prof. Leonidas Guibas, Prof. Yanchao Yang, Colton Stearns
Nov 2022 - Mar 2023
  • 3D reconstruction with curve data
University of Illinois Urbana-Champaign | Advisors: Prof. Heng Ji, Sha Li, Manling Li
June 2022 - Dec 2022
  • Language-side approaches for procedure planning
Zhejiang University | Advisors: Prof. Mingli Song, Prof. Zunlei Feng, Ya Zhao
June 2022 - Dec 2022
  • Improving the efficiency of Transformer models
Zhejiang University | Advisors: Prof. Zicheng Liu, Prof. Mingli Song
Sep 2021 - Dec 2021
  • 3D human mesh reconstruction

Work Experience

Applied Scientist Intern | Company: Amazon
May 2025 - Aug 2025 | Seattle, WA
  • Applied Scientist Intern on the Amazon Alexa AI team

Assistantship

Teaching Assistant | Course: CS440 at UIUC
Aug 2023 - Dec 2023
  • Teaching assistant for CS440 (Artificial Intelligence) at UIUC
Research Assistant | Advisor: Prof. Heng Ji
Dec 2023 - Present
  • Research assistant in Prof. Heng Ji's group