Demos, Code, and Dataset Downloads (please cite the related publications)
Our work has been cited in the International AI Safety Report 2025 (chaired by Turing Award winner Yoshua Bengio and written by 100 leading AI experts), which highlights machine unlearning as a pioneering paradigm for removing sensitive information or harmful data from trained AI models.
Privacy-Preserving Explainable AI: A Survey. [Code] [Cite1] [Cite2]
LLM-empowered Image Editing. [Code]
Fast-FedUL: A Training-Free Federated Unlearning with Provable Skew Resilience. [Code] [Cite]
Higher-order knowledge-enhanced recommendation with heterogeneous hypergraph multi-attention. [Code] [Cite]
On-Device Diagnostic Recommendation with Heterogeneous Federated BlockNets. [Code] [Cite]
Towards Self-Adaptive LLM-based News Verification with Real-time Evidence. [Demo]
Manipulating Recommender Systems: A Survey of Poisoning Attacks and Countermeasures. [Code] [Cite]
A Comprehensive Survey on Heart Sound Analysis in the Deep Learning Era. [Code] [Cite]
A comparative study of question answering over knowledge bases. [Code] [Cite]
Entity Alignment for Knowledge Graphs with Multi-order Convolutional Networks. [Code] [Cite]
A dual benchmarking study of facial forgery and facial forensics. [Code] [Cite]
FactCatch: Incremental Pay-as-You-Go Fact Checking with Minimal User Effort. [Code] [Cite]
Prototype Learning for Interpretable Respiratory Sound Analysis. [Code] [Cite]
Example-based Explanations with Adversarial Attacks for Respiratory Sound Analysis. [Code] [Cite]
Fast Yet Effective Speech Emotion Recognition with Self-distillation. [Code] [Cite]
Knowledge Transfer For On-Device Speech Emotion Recognition with Neural Structured Learning. [Code] [Cite]
Efficient Integration of Multi-Order Dynamics and Internal Dynamics in Stock Movement Prediction. [Code] [Cite]
Graph representation learning benchmark. [Code]
Lightweight Branching Self-Distillation: Be Your Own Teacher. [Code] [Cite]
Traffic Speed Prediction. [Code] [Cite2] (Best Paper Award)