Jie M. Zhang (张洁)
Lecturer (Assistant Professor), King's College London, UK.
Email: jie.zhang@kcl.ac.uk
Links: CV Google Scholar
Office: Bush House, Strand Campus, 30 Aldwych, London, WC2B 4BG, UK
My research focuses on bridging software engineering (SE) and artificial intelligence (AI) to enhance the trustworthiness of both domains. This involves two key directions:
AI for SE: Leveraging AI technologies to automate software engineering tasks, such as code generation, test case creation, fault localisation, and automated program repair. Recent advances in large language models (LLMs) have significantly improved capabilities in these areas, enabling more efficient and accurate software development processes.
SE for AI: Applying software engineering principles to AI systems by treating them as specialised software. This direction uses established software engineering techniques to automatically detect and fix issues within AI models.
I received the 2025 ACM SIGSOFT Early Career Researcher Award "for pioneering contributions to software engineering for AI, significantly shaping and transforming the field of AI system testing". The award is widely recognised as the most prestigious honour for early-career researchers in the software engineering community and is granted to just one recipient each year.
Before joining King's College London, I was a research fellow at University College London, working with Professor Mark Harman and Professor Federica Sarro. I obtained my PhD in Computer Science from Peking University, China, where my advisors were Professor Lu Zhang and Professor Dan Hao.
26th Sep 2025, our paper "Large Language Models Miss the Multi-agent Mark" (by Emanuele La Malfa, Gabriele La Malfa, Samuele Marro, Jie M. Zhang, Elizabeth Black, Michael Luck, Philip Torr, Michael J. Wooldridge) is accepted at the NeurIPS 2025 position paper track.
18th Sep 2025, our paper "EffiBench-X: A Multi-Language Benchmark for Measuring Efficiency of LLM-Generated Code" is accepted at the NeurIPS 2025 datasets and benchmarks track.
10th August 2025, our paper "Stealthy Backdoor Attack for Code Models" (Zhou Yang, Bowen Xu, Jie M. Zhang, Hong Jin Kang, Jieke Shi, Junda He, David Lo) received the IEEE TSE 2024 Best Paper Award!
20th June 2025, our paper "Measuring the Influence of Incorrect Code on Test Generation" by Dong Huang, Jie M. Zhang, Mark Harman, Mingzhe Du, and Heming Cui is accepted by ICSE 2026.
16th May 2025, two ACL main track papers accepted: "LLM-Powered Test Case Generation for Detecting Bugs in Plausible Programs" (by Kaibo Liu, Zhenpeng Chen, Yiyang Liu, Jie M. Zhang, Mark Harman, Yudong Han, Yun Ma, Yihong Dong, Ge Li, Gang Huang) and "Personality-Guided Code Generation Using Large Language Models" (by Yaoqi Guo, Zhenpeng Chen, Jie Zhang, Yang Liu, Yun Ma).
1st May 2025, our FSE 2025 paper "Hallucination Detection in Large Language Models with Metamorphic Relations" (by Borui Yang, Md Afif Al Mamun, Jie M. Zhang, Gias Uddin) received the Distinguished Paper Award at FSE 2025!
1st May 2025, our paper "SWIFTCODE: Enhancing Code Generation in Large Language Models through Efficiency-Aware Fine-tuning" (by Dong Huang, Guangtao Zeng, Jianbo Dai, Meng Luo, Han Weng, Yuhao Qing, Heming Cui, Zhijiang Guo, Jie M. Zhang) is accepted by ICML 2025.
25 Feb 2025, our paper "Bias Testing and Mitigation in LLM-based Code Generation" (by Dong Huang, Jie M. Zhang, Qingwen Bu, Xiaofei Xie, Junjie Chen, and Heming Cui) is accepted by TOSEM.
19 Jan 2025, our paper "Knowledge-Enhanced Program Repair for Data Science Code" (by Shuyin Ouyang, Jie M. Zhang, Zeyu Sun, Albert Merono Penuela) is accepted by ICSE 2025.
15 Jan 2025, our paper "Hallucination Detection in Large Language Models with Metamorphic Relations" (by Borui Yang, Md Afif Al Mamun, Jie M. Zhang, Gias Uddin) is accepted by FSE 2025.
2nd Nov 2024, our paper "Diversity Drives Fairness: Ensemble of Higher Order Mutants for Intersectional Fairness of Machine Learning Software" (by Zhenpeng Chen, Xinyue Li, Jie M. Zhang, Federica Sarro, Yang Liu) is accepted at ICSE 2025.
26th Sep 2024, our paper "EffiBench: Benchmarking the Efficiency of Automatically Generated Code" (by Dong Huang, Yuhao Qing, Weiyi Shang, Heming Cui, Jie M. Zhang) has been accepted by the NeurIPS 2024 Datasets and Benchmarks track.
26th Sep 2024, our paper "SOAP: Enhancing Efficiency of Generated Code via Self-Optimization" (by Dong Huang, Jianbo Dai, Han Weng, Puzhen Wu, Yuhao Qing, Heming Cui, Zhijiang Guo, Jie M. Zhang) has been accepted by NeurIPS 2024 as a main track poster.
23rd August 2024, our paper "An Empirical Study of the Non-determinism of ChatGPT in Code Generation" (by Shuyin Ouyang, Jie M. Zhang, Mark Harman, Meng Wang) has been accepted by ACM TOSEM.
15th April 2024, our paper "MirrorFair: Fixing Fairness Bugs in Machine Learning Software via Counterfactual Predictions" (by Ying Xiao, Jie M. Zhang, Yepang Liu, Mohammad Reza Mousavi, Sicen Liu, Dingyuan Xue) has been accepted by FSE 2024.
15th April 2024, our paper "Fairness Testing of Machine Translation Systems" (by Zeyu Sun, Zhenpeng Chen, Jie M. Zhang, Dan Hao) has been accepted by TOSEM.
28th March 2024, I got the Royal Society International Exchange Grant to support my research collaboration with Prof. Yepang Liu from SUSTech China.
23rd Jan 2024, our paper titled "Stealthy Backdoor Attack for Code Models" (by Zhou Yang, Bowen Xu, Jie M. Zhang, Hong Jin Kang, Jieke Shi, Junda He, David Lo) has been accepted by IEEE Transactions on Software Engineering.
Dec 18th 2023, our paper on machine learning testing received the 2022 Best Paper Award from IEEE Transactions on Software Engineering.
Nov 14th 2023, I received the NMES Enterprise & Engagement Partnerships Fund from King's. I will use the funding to organise a one-day event in early 2024 to bridge the gap between industry and academia in LLMs + SE.
Oct 25th 2023, our paper "Search-based Automatic Repair for Fairness and Accuracy in Decision-making Software" (by Max Hort, Jie M. Zhang, Federica Sarro, Mark Harman) is accepted by Empirical Software Engineering.
Oct 15th 2023, our paper "Fairness Improvement with Multiple Protected Attributes: How Far Are We?" (by Zhenpeng Chen, Jie M. Zhang, Federica Sarro, Mark Harman) is accepted by ICSE 2024.
August 17th 2023, our paper "Dark-Skin Individuals Are at More Risk on the Street: Unmasking Fairness Issues of Autonomous Driving Systems" is reported by New Scientist [link]!
August 15th 2023, our paper on "Mutation Analysis for Evaluating Code Translation" (Giovani Guizzo, Jie M. Zhang, Federica Sarro, Christoph Treude, Mark Harman) is accepted by Empirical Software Engineering!
July 15th 2023, I am on the selection committee for ASE 2023 Most Influential Paper award.
April 18th 2023. I am selected as a new member of the steering committee for ICST. Looking forward to having more opportunities to serve the community!
April 4th 2023. I gave a talk at Huawei on "Automatically Assessing and Improving the Trustworthiness of Code".
March 8th 2023. I am selected as one of the top fifteen 2023 Chinese Female Young Scholars in AI+X (全球华人女性青年学者榜). The selection is based on academic excellence, influence, and potential, assessed through big-data mining and automated evaluation methods.
March 3rd 2023. I am invited as a panelist for the seminar "ChatGPT: A Closer Look at Large Language Models" organised by CBAIA (Chinese-British Artificial Intelligence Association).
Feb 28th 2023. "Model Validation Using Mutated Training Labels: An Exploratory Study" (Jie M. Zhang, Mark Harman, Benjamin Guedj, Earl Barr, John Shawe-Taylor) is accepted by Neurocomputing.
Feb 14th 2023. I gave a keynote talk on machine translation trustworthiness at PracticalDL in AAAI 2023.
Jan 16th 2023. "Who Judges the Judge: An Empirical Study on Online Judge Tests" (Kaibo Liu, Yudong Han, Jie Zhang, Zhenpeng Chen, Federica Sarro, Mark Harman, Gang Huang, Yun Ma) is accepted by ISSTA 2023.
Jan 11th 2023. "A Comprehensive Empirical Study of Bias Mitigation Methods for Machine Learning Classifiers" (Zhenpeng Chen, Jie M. Zhang, Federica Sarro, Mark Harman) is accepted by ACM TOSEM.
Dec 8th 2022. "Vulnerability Detection with Graph Simplification and Enhanced Graph Representation Learning" (Xin-Cheng Wen, Yupan Chen, Cuiyun Gao, Hongyu Zhang, Jie M. Zhang, Qing Liao) is accepted by ICSE 2023.
Oct 26th 2022. I gave a talk on ML trustworthiness at Royal Holloway.
July 26th 2022. I am invited to give a talk on ML trustworthiness at Data61, CSIRO.
July 20th 2022. Our paper "Natural Test Generation for Precise Testing of Question Answering Software" (by Qingchao Shen, Junjie Chen, Jie M. Zhang, Haoyu Wang, Shuang Liu, Menghan Tian) is accepted by ASE 2022!
June 28th 2022. I am invited to give a talk on Trustworthy Machine Translation at Fudan University.
June 24th 2022. I am invited to give a talk on testing machine learning systems at the University of Bristol.
June 14th 2022. Our paper "MAAT: A Novel Ensemble Approach to Fixing Fairness and Performance Bugs for Machine Learning Software" (by Zhenpeng Chen, Jie M. Zhang, Federica Sarro, Mark Harman) is accepted by FSE 2022!
June 13th 2022. I am invited as a keynote speaker at RACSAD 2022 on Machine Learning Testing.
June 1st 2022. I am excited to start my new journey at King's College London as a lecturer (US Assistant Professor) of Software Engineering. My KCL webpage is here.
March 3rd 2022. "Using Lexical and Semantic Program Features to form Generic Code Representation" (by Wei Ma, Mengjie Zhao, Ezekiel Soremekun, Qiang Hu, Jie Zhang, Mike Papadakis, Maxime Cordy, Xiaofei Xie, Yves Le Traon) is accepted by MSR 2022.
Jan 28th 2022. Our paper "Leveraging Automated Unit Tests for Unsupervised Code Translation" has been selected as a spotlight paper (acceptance rate 5%) in ICLR 2022!
Jan 20th 2022. "Leveraging Automated Unit Tests for Unsupervised Code Translation" (by Baptiste Roziere, Jie M. Zhang, Francois Charton, Mark Harman, Gabriel Synnaeve, Guillaume Lample) is accepted by ICLR 2022.
Research Fellow | March, 2019 – May, 2022 | CREST, UCL, UK | Supervisor : Prof. Mark Harman and Prof. Federica Sarro
Research Associate | February, 2018 – December, 2018 | CREST, UCL, UK | Supervisor : Prof. Mark Harman and Prof. Earl Barr
PhD student | Sep, 2015 – June, 2018 | GOSTA, Peking University, China | Supervisor : Prof. Lu Zhang and Prof. Dan Hao
Intern | June, 2017 – December, 2017 | Software Analysis group, Microsoft Research Asia, China | Mentor: Shi Han
Visiting student | Sep, 2016–Dec, 2016 | CREST, University College London, UK | Supervisor : Mark Harman
2018 Outstanding PhD Graduate Award, Peking University
2017 Top-ten Research Excellence Award, EECS, Peking University
2016 Fellowship at Microsoft Research Asia
2016 Lee Wai Wing Scholarship at Peking University
2015 National Scholarship
2015 Award for Scientific Research
2014 Learning Scholarship at Peking University
2014 Award for Scientific Research
2014 Innovation Award at Peking University
2013 Learning Excellence Award at Peking University