I am a second-year Ph.D. candidate at The University of Hong Kong (HKU), supervised by Prof. Lingpeng Kong. I obtained my bachelor’s and master’s degrees from the Department of Computer Science at Fudan University, advised by Prof. Xipeng Qiu. My current research focuses on building effective long-context LLMs and scaling reinforcement learning with LLMs.
Selected Publications
POLARIS: A Post-Training Recipe for Scaling Reinforcement Learning on Advanced Reasoning Models
Chenxin An, Zhihui Xie, Xiaonan Li, Lei Li, Jun Zhang, Shansan Gong, Ming Zhong, Jingjing Xu, Xipeng Qiu, Mingxuan Wang, Lingpeng Kong
Polaris | A state-of-the-art RL training recipe for advanced reasoning models.
L-Eval: Instituting Standardized Evaluation for Long Context Language Models (ACL 2024)
Chenxin An, Shansan Gong, Ming Zhong, Mukai Li, Jun Zhang, Lingpeng Kong, Xipeng Qiu (Outstanding paper, ACL 2024)
L-Eval | A comprehensive evaluation suite for long-context language models, with 20 sub-tasks and optimized evaluation metrics.
Training-Free Long-Context Scaling of Large Language Models (ICML 2024)
Chenxin An, Fei Huang, Jun Zhang, Shansan Gong, Xipeng Qiu, Chang Zhou, Lingpeng Kong
ChunkLlama | A training-free method that extends the context length of Llama 2 70B to 100k tokens (a 48× extension).
Why Does the Effective Context Length of LLMs Fall Short? (ICLR 2025)
Chenxin An, Jun Zhang, Ming Zhong, Lei Li, Shansan Gong, Yao Luo, Jingjing Xu, Lingpeng Kong
STRING | Findings on why the effective context length of LLMs lags behind their training length, plus a new positional encoding, STRING.
CoNT: Contrastive Neural Text Generation (NeurIPS 2022 Spotlight)
Chenxin An, Jiangtao Feng, Kai Lv, Lingpeng Kong, Xipeng Qiu, Xuanjing Huang
CoNT | A contrastive learning method for improving autoregressive text generation.
Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective (ICLR 2024)
Ming Zhong, Chenxin An, Weizhu Chen, Jiawei Han, Pengcheng He
Scaling Laws of RoPE-based Extrapolation (ICLR 2024)
Xiaoran Liu, Hang Yan, Shuo Zhang, Chenxin An, Xipeng Qiu, Dahua Lin
Temporal Reasoning Transfer from Text to Video (ICLR 2025)
Lei Li, Yuanxin Liu, Linli Yao, Peiyuan Zhang, Chenxin An, Lean Wang, Xu Sun, Lingpeng Kong, Qi Liu
Scaling Diffusion Language Models via Adaptation from Autoregressive Models (ICLR 2025)
Shansan Gong, Shivam Agarwal, Yizhe Zhang, Jiacheng Ye, Lin Zheng, Mukai Li, Chenxin An, Peilin Zhao, Wei Bi, Jiawei Han, Hao Peng, Lingpeng Kong
Honors and Awards
- Outstanding paper, ACL 2024
- Hong Kong PhD Fellowship Scheme (HKPFS), 2023
- National Scholarship at Fudan University, 2022
- National Scholarship at Fudan University, 2021