Welcome to Jiateng's Home Page!

Hi there! Welcome to my personal homepage!

I'm Jiateng Liu (刘嘉腾), a second-year MS student at the University of Illinois Urbana-Champaign (UIUC) under the guidance of Prof. Heng Ji. Previously, I earned my bachelor's degree in Computer Science from Zhejiang University.

I'm interested in the Science of NLP while keeping an eye on its Applications.

Science: I am deeply committed to the rigorous study of large models. I strive to enhance their interpretability and effectiveness. As we move into the next decade, I am excited to contribute to the evolution of AI—developing systems that not only perform complex tasks but are also interpretable and genuinely beneficial to humanity.

Applications: I believe that AI must translate theoretical prowess into substantial improvements in daily life. I keep an eye on the applications of NLP in Science and look forward to the next 'ChatGPT' moment.


Research

My research interests are organized into three key areas:

1. The Physics and Interpretability of Language Models: I am intrigued by the underlying "physics" of Large Language Models (LLMs), focusing on how they absorb knowledge, process information, and make predictions. A major part of my research is improving the efficiency and accuracy of updating pretrained LLMs, ensuring that their knowledge remains robust, consistent, and up-to-date while minimizing costs and time. Additionally, I am deeply interested in the interpretability of LLMs—understanding how they represent and manipulate information internally to provide more transparent and trustworthy AI systems.

2. Multi-Modal Representation Learning and Multi-Media Foundational Models: My work in this area centers on designing new paradigms for multi-modal interactions and deriving empirical scaling laws for multi-modal foundational models. I focus particularly on video understanding and generation, aiming to seamlessly integrate language, visual, and temporal modalities. A key aspect of my research is exploring how multimodal interactions are learned, including the mechanisms by which information flows and aligns across modalities to create cohesive representations. I also investigate protocols for efficient and complete multimodal interactions, ensuring that each modality contributes optimally to the task at hand while minimizing redundancy and maximizing interpretability. My goal is to develop systems capable of effectively analyzing and generating content across diverse modalities, with applications in tasks such as video captioning, video-based reasoning, and video synthesis, thereby advancing the capabilities of multi-modal AI.

3. NLP and Multi-Disciplinary Science for Social Good: I use NLP to tackle challenges in social and scientific domains. For example, I analyze social media data to study the spread of misinformation and its societal impacts, helping to develop tools that counteract disinformation. Additionally, I explore how NLP can contribute to scientific advancements, such as facilitating drug design and interpreting neural signals. These interdisciplinary applications demonstrate the transformative potential of NLP in both improving lives and advancing science.

For more on my future vision and projects, feel free to visit my blog or reach out for a discussion.

I am currently seeking a Ph.D. position starting in Fall 2025!

Publications

[1] EVEDIT: Event-based Knowledge Editing for Deterministic Knowledge Propagation [Paper] (EMNLP 2024)
Jiateng Liu* , Pengfei Yu*, Yuji Zhang, Sha Li, Zixuan Zhang, Ruhi Sarikaya, Kevin Small, Heng Ji

[2] A Language First Approach for Procedure Planning [Paper] (ACL 2023 Findings)
Jiateng Liu*, Sha Li*, Zhenhailong Wang, Manling Li, Heng Ji

[3] PropaInsight: Toward Deeper Understanding of Propaganda in Terms of Techniques, Appeals, and Intent [Paper] (COLING 2025)
Jiateng Liu*, Lin Ai*, Zizhou Liu, Payam Karisani, Zheng Hui, May Fung, Preslav Nakov, Julia Hirschberg, Heng Ji

[4] MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback [Paper] [Code]
(ICLR 2024) Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji

[5] If LLM Is the Wizard, Then Code Is the Wand: A Survey on How Code Empowers Large Language Models to Serve as Intelligent Agents [Paper] (ICLR 2024 Workshop, in submission to ACM Computing Surveys)
Ke Yang*, Jiateng Liu*, John Wu, Chaoqi Yang, Yi R. Fung, Sha Li, Zixuan Huang, Xu Cao, Xingyao Wang, Yiquan Wang, Heng Ji, Chengxiang Zhai

[6] CurveCloudNet: Processing Point Clouds with 1D Structure [Paper] (CVPR 2024)
Colton Stearns, Jiateng Liu, Davis Rempe, Despoina Paschalidou, Jeong Joon Park, Sebastien Mascha, Leonidas J. Guibas

[7] Knowledge Overshadowing Causes Amalgamated Hallucination in Large Language Models [Paper]
Yuji Zhang, Sha Li, Jiateng Liu, Pengfei Yu, Yi R Fung, Jing Li, Manling Li, Heng Ji

Research Internship Experience

Nov 2022 - March 2023 [Stanford University]

  • Research Intern.
  • Mentors: Prof. Leonidas Guibas, Prof. Yanchao Yang, Colton Stearns
  • Focus: 3D reconstruction with curve data

June 2022 - Dec 2022 [University of Illinois Urbana-Champaign]

  • Research Intern.
  • Mentors: Prof. Heng Ji, Sha Li, Manling Li
  • Focus: Language-side approaches for Procedure Planning

June 2022 - Dec 2022 [Zhejiang University]

  • Research Intern.
  • Mentors: Prof. Mingli Song, Prof. Zunlei Feng, Ya Zhao
  • Focus: Making Transformers efficient

Sep 2021 - Dec 2021 [Zhejiang University]

  • Research Intern.
  • Mentors: Prof. Zicheng Liu, Prof. Mingli Song
  • Focus: 3D human mesh reconstruction

Assistantships

Aug 2023 - Dec 2023
Teaching assistant for CS440 at UIUC.

Dec 2023 - now
Research assistant of Prof. Heng Ji.