Shichang (Ray) Zhang

I am on the academic job market for the 2025 to 2026 cycle. If you believe I am a good fit for a position, please don't hesitate to reach out.
[CV] [Research Statement] [Teaching Statement] [Talk Sample 1 (requires NeurIPS login)] [Talk Sample 2]

Contact

Science and Engineering Complex (SEC) 6.220, 150 Western Ave, Boston, MA 02134
E-mail: shzhang AT hbs DOT edu

[Google Scholar] [GitHub] [LinkedIn] [X]

About Me

I am a postdoctoral fellow at the D^3 Institute at Harvard University, working with Professor Hima Lakkaraju. I received my Ph.D. in Computer Science from the University of California, Los Angeles (UCLA), advised by Professor Yizhou Sun. My Ph.D. research was generously supported by the J.P. Morgan Chase AI Ph.D. Fellowship and the Amazon Fellowship. Before UCLA, I received my M.S. and B.A., both in Statistics, from Stanford and Berkeley, respectively.

My research aims to scientifically understand AI to ensure it is trustworthy and beneficial to humanity. I have developed principled methods to analyze and improve the trustworthiness of AI systems across the full spectrum, from model mechanisms to training processes to data features. (1) Model-wise, I study large language models (LLMs) to reveal their internal mechanisms and reasoning capabilities, enabling task-specific interpretable models built on them. (2) Training-wise, I develop techniques to measure the influence of training on AI behavior, providing new tools for training data assessment, model auditing, and credit assignment to developers. (3) Data-wise, I design methods to examine how data features drive AI decisions, allowing non-expert users to interpret and effectively use AI in healthcare, science, and e-commerce applications. Collectively, my research moves beyond black-box empiricism toward a holistic scientific understanding for trustworthy AI.

Selected Publications and Preprints

  1. How Post-Training Reshapes LLMs: A Mechanistic View on Knowledge, Truthfulness, Refusal, and Confidence
    Hongzhe Du*, Weikai Li*, Min Cai, Karim Saraipour, Zimin Zhang, Himabindu Lakkaraju, Yizhou Sun, Shichang Zhang (*equal contribution)
    COLM 2025 (NENLP Outstanding Paper) [PDF] [Code] [Slides]

  2. Towards Unified Attribution in Explainable AI, Data-Centric AI, and Mechanistic Interpretability
    Shichang Zhang, Tessa Han, Usha Bhalla, Himabindu Lakkaraju
    Preprint, Under Review [PDF]

  3. Automated Molecular Concept Generation and Labeling with Large Language Models
    Zimin Zhang*, Qianli Wu*, Botao Xia*, Fang Sun, Ziniu Hu, Yizhou Sun, Shichang Zhang (*equal contribution)
    COLING 2025 [PDF] [Code]

  4. An Explainable AI Approach using Graph Learning to Predict ICU Length of Stay
    Tianjian Guo, Indranil Bardhan, Ying Ding, Shichang Zhang
    Information Systems Research (ISR), Oct. 2024 [PDF (official)] [PDF (preprint)]

  5. Predicting and Interpreting Energy Barriers of Metallic Glasses with Graph Neural Networks
    Haoyu Li*, Shichang Zhang*, Longwen Tang, Mathieu Bauchy, Yizhou Sun (*equal contribution)
    ICML 2024 [PDF] [Code]

  6. PaGE-Link: Graph Neural Network Explanation for Heterogeneous Link Prediction
    Shichang Zhang, Jiani Zhang, Xiang Song, Soji Adeshina, Da Zheng, Christos Faloutsos, Yizhou Sun
    WWW 2023 [PDF] [Code]

  7. GStarX: Explaining Graph Neural Networks with Structure-Aware Cooperative Games
    Shichang Zhang, Neil Shah, Yozen Liu, Yizhou Sun
    NeurIPS 2022 [PDF] [Code]

  8. Graph-less Neural Networks: Teaching Old MLPs New Tricks via Distillation
    Shichang Zhang, Yozen Liu, Yizhou Sun, Neil Shah
    ICLR 2022 [PDF] [Code]

Full list of publications

Honors and Awards