Chao Huang
Assistant Professor
Department of Computer Science & Institute of Data Science
Email: chaohuang75@gmail.com | Google Scholar | Lab GitHub
Team WeChat Official Account (团队公众号) | Twitter | LinkedIn | Xiaohongshu (小红书)
I am an Assistant Professor and Ph.D. Advisor at the Department of Computer Science and the Institute of Data Science, The University of Hong Kong (HKU). I am the director of the Data Intelligence Lab@HKU, which focuses on Large Language Models, LLM Agents, Graph Learning, Recommender Systems, and AI for Smart Cities.
Our Research Work Is Open-Sourced: Explore It on Our Lab GitHub Repository
Recent LLM Research Work
Research Achievements:
2024 World AI Conference (WAIC) "Bright Stars"
2024 Frontiers of Science Award from ICBS
World's Top 2% Scientists, published by Stanford University (2022, 2023, 2024)
1 × WWW 2024 Most Influential Paper (Rank 1st / 405 Accepted Papers)
1 × SIGIR 2024 Most Influential Paper (Rank 3rd / 159 Accepted Papers)
1 × KDD 2024 Most Influential Paper (Rank 10th / 559 Accepted Papers)
3 × WWW 2023 Most Influential Papers (Ranks 4th, 5th, 10th / 323 Accepted Papers)
1 × SIGIR 2023 Most Influential Paper (Rank 12th / 165 Accepted Papers)
2 × KDD 2023 Most Influential Papers (Ranks 10th, 11th / 497 Accepted Papers)
2 × SIGIR 2022 Most Influential Papers (Ranks 2nd, 3rd / 161 Accepted Papers)
1 × SIGIR 2021 Most Influential Paper (Rank 13th / 151 Accepted Papers)
1 × KDD 2019 Most Influential Paper (Rank 3rd / 174 Accepted Papers)
WSDM 2024 Top-1 Most Cited Paper (Rank 1st / 112 Accepted Papers)
WSDM 2023 Top-1 Most Cited Paper (Rank 1st / 123 Accepted Papers)
WSDM 2022 Top-3 Most Cited Paper (Rank 3rd / 159 Accepted Papers)
ACM MM 2024 Best Paper Honorable Mention Award
WWW 2023 Best Paper Nomination
WSDM 2022 Best Paper Nomination
WWW 2019 Best Paper Nomination
Research Work Highlights:
Google Scholar citations: 9,700+ (4,000+ in 2024); h-index 51, i10-index 103
(In Reverse Chronological Order; * Indicates Corresponding Author, + Indicates Supervised Student)
[WWW'2024] "RLMRec: Representation Learning with Large Language Models for Recommendation"
X. Ren+, W. Wei, L. Xia, L. Su, S. Cheng, J. Wang, D. Yin and C. Huang*
(Top-1 Most Influential Paper: 1 / 405 Accepted Papers)
(Top-1 Most Cited Paper: 1 / 405 Accepted Papers)
[paper] (~140 Citations) [code] (~370 GitHub Stars)
[SIGIR'2024] "GraphGPT: Graph Instruction Tuning for Large Language Models"
J. Tang+, Y. Yang, W. Wei, L. Shi, L. Su, S. Cheng, D. Yin and C. Huang*
(Top-3 Most Influential Paper: 3 / 159 Accepted Papers)
(Top-2 Most Cited Paper: 2 / 159 Accepted Papers)
[paper] (~185 Citations) [code] (~700 GitHub Stars)
[KDD'2024] "UrbanGPT: Spatio-Temporal Large Language Models"
Z. Li+, L. Xia, J. Tang, Y. Xu, L. Shi, L. Xia, D. Yin and C. Huang*
(Top-10 Most Influential Paper: 10 / 559 Accepted Papers)
(Top-3 Most Cited Paper at ADS Track: 3 / 148 Accepted Papers)
[paper] (~60 Citations) [code] (~330 GitHub Stars)
[KDD'2024] "HiGPT: Heterogeneous Graph Language Models"
J. Tang+, Y. Yang, W. Wei, L. Shi, L. Xia, D. Yin and C. Huang*
(Top-7 Most Cited Paper at Research Track: 7 / 411 Accepted Papers)
[paper] (~30 Citations) [code] (~120 GitHub Stars)
[MM'2024] "DiffMM: Multi-Modal Diffusion Model for Recommendation"
Y. Jiang+, L. Xia, W. Wei, D. Luo, K. Lin and C. Huang*
(Best Paper Nomination & Honorable Mention Award: 10 / 1149 Accepted Papers)
[WSDM'2024] "LLMRec: Large Language Models with Graph Augmentation for Recommendation"
W. Wei+, X. Ren, J. Tang, Q. Wang, L. Su, S. Cheng, J. Wang, D. Yin and C. Huang*
(Top-1 Most Cited Paper: 1 / 112 Accepted Papers)
[paper] (~195 Citations) [code] (~430 GitHub Stars)
[WWW'2023] "Automated Self-Supervised Learning for Recommendation"
L. Xia+, C. Huang*, C. Huang, K. Lin, T. Yu and B. Kao
π (Spotlight Paper & Best Paper Nomination: 16/365 Accepted Papers) π
π (Top-10 Most Influential Paper: 10/ 365 Accepted Papers) π
π (Top-11 Most Cited Paper: 11 / 365 Accepted Papers) π
[paper] (~90 Citations π) [code]
[WWW'2023] "Multi-Modal Self-Supervised Learning for Recommendation"
W. Wei+, C. Huang*, L. Xia and C. Zhang
π (Top-4 Most Influential Paper: 4/ 365 Accepted Papers) π
π (Top-5 Most Cited Paper: 5 / 365 Accepted Papers) π
[paper] (~140 Citations π) [code] (~185 GitHub Stars π)
[WWW'2023] "Debiased Contrastive Learning for Sequential Recommendation"
Y. Yang+, C. Huang*, L. Xia, C. Huang, D. Luo and K. Li
π (Top-4 Most Influential Paper: 5/ 365 Accepted Papers) π
π (Top-5 Most Cited Paper: 6 / 365 Accepted Papers) π
[paper] (~130 Citations π) [code]
[SIGIR'2023] "Disentangled Contrastive Collaborative Filtering"
X. Ren+, L. Xia, J. Zhao, D. Yin and C. Huang*
π (Top-12 Most Influential Paper: 12/ 365 Accepted Papers) π
π (Top-6 Most Cited Paper: 6 / 365 Accepted Papers) π
[paper] (~100 Citations π) [code]
[ICLR'2023] "LightGCL: Simple Yet Effective Graph Contrastive Learning for Recommendation"
X. Cai+, C. Huang*, L. Xia and X. Ren
(Selected as Spotlight Paper: Top 25%)
[paper] (~300 Citations) [code]
[WSDM'2023] "Heterogeneous Graph Contrastive Learning for Recommendation"
M. Chen+, C. Huang*, L. Xia, W. Wei, Y. Xu and R. Luo
(Top-1 Most Cited Paper: 1 / 123 Accepted Papers)
[paper] (~175 Citations) [code]
[SIGIR'2022] "Hypergraph Contrastive Collaborative Filtering"
L. Xia+, C. Huang*, Y. Xu, J. Zhao, D. Yin and J. Huang
(Top-3 Most Influential Paper: 3 / 161 Accepted Papers)
(Top-2 Most Cited Paper: 2 / 161 Accepted Papers)
[paper] (~385 Citations) [code]
[SIGIR'2022] "Knowledge Graph Contrastive Learning for Recommendation"
Y. Yang+, C. Huang*, L. Xia and C. Li
(Top-2 Most Influential Paper: 2 / 161 Accepted Papers)
(Top-3 Most Cited Paper: 3 / 161 Accepted Papers)
[paper] (~380 Citations) [code]
[WSDM'2022] "Contrastive Meta Learning with Behavior Multiplicity for Recommendation"
W. Wei+, C. Huang*, L. Xia, Y. Xu, J. Zhao and D. Yin
(Best Paper Nomination)
(Top-3 Most Cited Paper: 3 / 159 Accepted Papers)
[paper] (~180 Citations) [code]
[SIGIR'2021] "Graph Meta Network for Multi-Behavior Recommendation"
L. Xia+, Y. Xu, C. Huang*, P. Dai and L. Bo
(Top-13 Most Influential Paper: 13 / 151 Accepted Papers)
[paper] (~215 Citations) [code]
[KDD'2019] "Heterogeneous Graph Neural Network"
C. Zhang, D. Song, C. Huang, A. Swami, N. Chawla
(Top-3 Most Influential Paper: 3 / 174 Accepted Papers)
[03/2025] - Please check out our newly released Fully-Automated Scientific Discovery System with LLM Agents: AI-Researcher
Complete End-to-End Research Automation & Streamlined Scientific Innovation
From Literature Review to Idea Generation and Algorithm Design
From Algorithm Implementation, Validation, and Refinement to Result Analysis and Manuscript Writing
[02/2025] - Please check out our newly released Fully-Automated, Zero-Code LLM Agent Framework: AutoAgent
Top Open-Source Performer on the GAIA Benchmark with Our Auto-Deep-Research
Build Ready-to-Use Agents and Workflows from Natural Language, No Coding Required
Equipped with a Native Self-Managing Vector Database & Integrates with a Wide Range of LLMs
[02/2025] - Please check out our newly released Extremely Long-Context Video Understanding System: VideoRAG
Efficient Extreme Long-Context Video Processing on a Single NVIDIA RTX 3090 GPU (24 GB)
Structured Video Knowledge Indexing & Multi-Modal Retrieval for Comprehensive Responses
The Newly Established LongerVideos Benchmark Features over 160 Videos Totaling 134+ Hours
[01/2025] - Please check out our newly released Small Language Model-Powered RAG System: MiniRAG
Retrieval-Augmented Generation for Resource-Constrained Environments
Lightweight Topology-Enhanced Retrieval, Optimized for Small Language Models (SLMs)
75% Reduction in Storage Requirements, Well-Suited for Edge Devices
[10/2024] - Please check out our newly released Retrieval-Augmented Generation System: LightRAG
Simple and Fast Retrieval-Augmented Generation (RAG) System
Comprehensive Information Retrieval Capturing Complex Inter-Dependencies
Efficient Information Retrieval with a Dual-Level Retrieval Paradigm
Rapid Adaptability to Dynamic Data Changes
[08/2024] - Please check out our newly released Graph Foundation Model: AnyGraph
[08/2024] - Please check out our newly released Spatio-Temporal Foundation Model: OpenCity
[08/2024] - Please check out our newly released Recommender Language Model: EasyRec
Easy-to-Use Language Models with Zero-Shot Recommendation Capability
Fast Adaptation to Evolving User Preferences
Seamless Integration with Existing Recommender Systems
[05/2024] - Please check out our released LLM for Explainable Recommendation: [EMNLP'2024] XRec
XRec: An LLM that Gives Your Recommender a Voice, Explaining User Preferences in Natural Language
Integrates Collaborative Filtering with LLMs to Generate Comprehensive Explanations for Recommendations
[03/2024] - Please check out our released Graph Foundation Model: [EMNLP'2024] OpenGraph
OpenGraph Uncovers the Potential of Graph Foundation Models
Zero-Shot Graph Generalization Distilled from LLMs
[03/2024] - Please check out our released Heterogeneous Graph Language Model: [KDD'2024] HiGPT
HiGPT: One Model for Any Heterogeneous Graph
Cross-Domain Zero-Shot Heterogeneous Graph Learning
1-Shot Beats 60-Shot with Graph In-Context Learning
[02/2024] - Please check out our released Spatio-Temporal Large Language Model: [KDD'2024] UrbanGPT
UrbanGPT Empowers LLMs to Comprehend the Intricate Inter-Dependencies Across Time and Space
It Enables More Comprehensive and Accurate Spatio-Temporal Prediction under Data Scarcity
[02/2024] - Please check out our released Large Language Model for Graph Structure Learning: GraphEdit
GraphEdit Leverages Large Language Models to Effectively Denoise Noisy Graph Connections
The Framework Identifies Node-Wise Dependencies from a Global Perspective
[11/2023] - Please check out our released Graph Large Language Model: [SIGIR'2024] GraphGPT
The GraphGPT Framework Aligns LLMs with Graph Structural Knowledge for Graph Learning
We Integrate Text-Graph Grounding with Instruction Tuning to Build an Effective and Efficient LLM for Graphs
[11/2023] - Please check out our released LLM-Enhanced Recommender System: [WSDM'2024] LLMRec
LLMRec: Simple yet Effective LLM-Based Graph Augmentation Strategies for Recommendation
We Enhance the Understanding of User Preferences by Incorporating LLM-Based Knowledge
[11/2023] - Please check out our released Representation Learning Framework with LLMs: [WWW'2024] RLMRec
RLMRec: Enhancing Existing Recommenders with Model-Agnostic, LLM-Empowered Representation Learning
We Integrate Representation Learning with LLMs to Capture the Intricate Semantic Aspects of User Behaviors