Large Language Models Fine-Tuning on Graphs
Examined the limitations of existing approaches to fine-tuning large language models on graphs.
Developed an efficient end-to-end LLM-GNN fine-tuning algorithm that adapts LLMs to downstream graph learning tasks with limited labeled data while remaining scalable and efficient (sketched below).
Designed comprehensive experiments validating the algorithm's prediction performance and efficiency.
Submitted a paper titled “Efficient Large Language Models Fine-Tuning on Graphs” to ICLR 2024 as the first author.
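A minimal sketch of an end-to-end LLM-GNN pipeline in PyTorch, assuming the LLM encodes node text into embeddings that a graph-convolution head then propagates and classifies. The toy encoder, shapes, and adjacency are placeholders (a pretrained transformer would replace the encoder in practice); this is illustrative, not the submitted paper's algorithm.

```python
import torch
import torch.nn as nn

class ToyLLMEncoder(nn.Module):
    """Stand-in for an LLM: maps token IDs to a pooled node embedding."""
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
    def forward(self, token_ids):             # (num_nodes, seq_len)
        return self.embed(token_ids).mean(1)  # mean-pool -> (num_nodes, dim)

class GNNHead(nn.Module):
    """One graph-convolution layer over a normalized adjacency matrix."""
    def __init__(self, dim=64, num_classes=3):
        super().__init__()
        self.lin = nn.Linear(dim, num_classes)
    def forward(self, x, adj):                # adj: (num_nodes, num_nodes)
        return self.lin(adj @ x)              # propagate, then classify

# Toy data: 4 nodes with short "texts", a placeholder normalized adjacency.
tokens = torch.randint(0, 1000, (4, 8))
adj = torch.eye(4) * 0.5 + 0.125
labels = torch.tensor([0, 1, 2, 0])

encoder, head = ToyLLMEncoder(), GNNHead()
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
loss = nn.functional.cross_entropy(head(encoder(tokens), adj), labels)
loss.backward()                               # gradients flow through both models
opt.step()
```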
Graph Neural Networks for Large Scale Recommendations
Explored the challenge of preserving GNNs' strong expressive capabilities while achieving linear computational complexity.
Introduced a novel GNN-based model for large-scale recommendations that uses a single propagation layer while preserving the ability to capture long-range collaborative signals (sketched below).
Enhanced the model with an efficient neighbor sampling strategy and mitigated the resulting approximation errors with an improved variance reduction technique.
Conducted extensive experiments demonstrating the model's effectiveness and efficiency.
Submitted a paper titled “Linear-Time Graph Neural Networks for Scalable Recommendations” to WWW 2024 as a co-first author.
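A minimal NumPy sketch of the single-propagation-layer idea: user and item embeddings are each aggregated once over the normalized interaction matrix before dot-product scoring. Shapes and the random data are illustrative; the paper's actual mechanism for long-range signals, sampling, and variance reduction is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 5, 7, 16
R = (rng.random((n_users, n_items)) < 0.3).astype(float)  # interaction matrix

# Symmetrically normalized bipartite adjacency: A_hat = D_u^-1/2 R D_i^-1/2
du = np.maximum(R.sum(1), 1.0)
di = np.maximum(R.sum(0), 1.0)
A_hat = R / np.sqrt(du)[:, None] / np.sqrt(di)[None, :]

U = rng.normal(size=(n_users, dim))   # user embeddings
V = rng.normal(size=(n_items, dim))   # item embeddings

# One propagation layer: each side aggregates its neighbors exactly once.
U_prop = A_hat @ V
V_prop = A_hat.T @ U

scores = U_prop @ V_prop.T            # (n_users, n_items) preference scores
print(np.argsort(-scores[0])[:3])     # top-3 items recommended to user 0
```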
Large-Scale Machine Learning on Graphs
Investigated the trade-off between capturing long-range dependencies and avoiding the neighborhood explosion problem in GNNs.
Proposed a novel approach that exploits computational redundancy to solve the neighborhood explosion problem and improve efficiency.
Developed highly memory-efficient networks based on implicit gradients, reusing propagation lazily across iterations (sketched below).
Conducted comprehensive experiments demonstrating strong performance and efficiency on large-scale graph datasets.
Published a paper titled “LazyGNN: Large-Scale Graph Neural Networks via Lazy Propagation” as the first author at ICML 2023.
Led a tutorial titled “Large-Scale Graph Neural Networks: The Past and New Frontiers” accepted at KDD 2023, and was invited to give an oral presentation at the PhD consortium.
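A minimal sketch of lazy propagation, under the assumption that a cached (stale) propagated feature matrix can be reused for several training iterations and refreshed only periodically; the refresh period, mixing coefficient, and random graph are illustrative choices, not LazyGNN's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, K, beta = 100, 8, 10, 0.5

# Random symmetric graph with self-loops, row-normalized.
A = (rng.random((n, n)) < 0.05).astype(float)
A = np.maximum(A, A.T) + np.eye(n)
A_hat = A / A.sum(1, keepdims=True)

X = rng.normal(size=(n, d))            # raw node features
cache = X.copy()                       # stale propagated features

for step in range(50):
    if step % K == 0:
        # Periodic refresh: one cheap propagation step mixes fresh
        # neighborhood information into the stale estimate.
        cache = (1 - beta) * cache + beta * (A_hat @ cache)
    H = cache                          # in-between steps reuse stale features
    # ... a downstream model would consume H here; gradients through the
    # propagation can be obtained implicitly rather than by backpropagating
    # through every propagation step (the implicit-gradient idea above).
```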
Trajectory-Aided Beam Configuration at the Mobile Terminal in LEO Satellite Networks
Analyzed various mathematical models for satellite trajectory prediction and evaluated their performance.
Derived vehicle trajectories from map data and computed the satellite's position relative to the vehicle.
Calculated the beamforming gain under beam misalignment caused by position error (sketched below).
Designed the beamformer based on the beam coherence time and SNR variation.
Evaluated LOS/NLOS spectral efficiency using stochastic gradient descent under a Rician channel model.
Simulated a deterministic NLOS DL/UL channel model using ray tracing and compared it against the stochastic channel model.
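A minimal sketch of the misalignment-to-gain computation, using the standard parabolic (3GPP-style) beam-pattern approximation in which the loss grows with the squared ratio of the pointing error to the 3 dB beamwidth. The toy geometry, beamwidth, and gain figures are illustrative assumptions, not the project's actual link budget.

```python
import numpy as np

def gain_db(g0_db, delta_deg, bw3db_deg, max_loss_db=30.0):
    """Gain when the beam points delta_deg off the true direction.
    Loss = 12*(delta/bw3db)^2 dB, capped; 3 dB loss occurs at half the
    beamwidth off boresight."""
    loss = min(12.0 * (delta_deg / bw3db_deg) ** 2, max_loss_db)
    return g0_db - loss

# Position error -> pointing error -> gain degradation (toy geometry):
slant_km, pos_err_km = 600.0, 5.0
delta = np.degrees(np.arctan2(pos_err_km, slant_km))  # misalignment angle
print(f"misalignment {delta:.2f} deg, gain {gain_db(35.0, delta, 2.0):.2f} dBi")
```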
Theoretical Research on the Dynamic Characteristics of Complex Electric Loads Based on Signal Processing
Established accurate mathematical and circuit models of the current transformer.
Proposed an analytical method for the transformer's dynamic composite errors under different dynamic test signals and current transformer parameters (sketched below).
Built modules and an Excel database in Qt (C++) and MATLAB to provide a visual waveform interface for real-time data storage, signal feature analysis, and feature display.
Published an EI-indexed paper titled “The Dynamic Error Analysis of the Current Transformer” and wrote “Analysis of the Parameters and the Error of the Current Transformer” as the first author.
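A minimal sketch of the composite-error computation for a current transformer, following the IEC-style definition: the RMS of the difference between the rated-ratio-scaled secondary current and the primary current, relative to the primary RMS. The synthetic 50 Hz waveforms below stand in for the project's dynamic test signals, and the sketch is in Python although the project's own tooling was Qt/C++ and MATLAB.

```python
import numpy as np

def composite_error_pct(i_p, i_s, ratio):
    """Composite error (%) from sampled primary/secondary currents."""
    diff_rms = np.sqrt(np.mean((ratio * i_s - i_p) ** 2))
    return 100.0 * diff_rms / np.sqrt(np.mean(i_p ** 2))

t = np.linspace(0, 0.1, 5000)                   # 100 ms sampling window
i_p = 100 * np.sin(2 * np.pi * 50 * t)          # 50 Hz primary current (A)
i_s = 0.99 * np.sin(2 * np.pi * 50 * t - 0.01)  # secondary with ratio/phase error
print(f"{composite_error_pct(i_p, i_s, 100.0):.3f} %")
```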
Beamformer Design and Doppler Shift Analysis Based on Deep-Learning Vehicle Motion Prediction in LEO Satellite Communications
Predicted vehicle motions, including maneuvers, using convolutional neural networks, and applied clustering algorithms to improve accuracy.
Developed a beam update strategy and computed the Doppler shift from the predicted vehicle motions (sketched below).
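A minimal sketch of the Doppler-shift step: the shift follows f_d = f_c * v_rad / c, where v_rad is the relative radial velocity along the line of sight to the satellite. The velocity vectors and carrier frequency below are illustrative values, not the project's scenario parameters.

```python
import numpy as np

C = 3e8                                    # speed of light (m/s)

def doppler_hz(f_c, v_rel, los_unit):
    """Doppler shift (Hz); positive when the satellite approaches.
    v_rel: satellite velocity relative to the vehicle (m/s).
    los_unit: unit vector from the vehicle toward the satellite."""
    v_radial = -np.dot(v_rel, los_unit)    # closing speed along the LOS
    return f_c * v_radial / C

v_sat = np.array([7500.0, 0.0, 0.0])       # LEO satellite velocity
v_veh = np.array([25.0, 5.0, 0.0])         # predicted vehicle velocity
los = np.array([0.6, 0.0, 0.8])            # unit vector vehicle -> satellite
print(f"{doppler_hz(20e9, v_sat - v_veh, los) / 1e3:.1f} kHz")
```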
Machine-Learning-Based Transmit Antenna Selection for Downlink MU-MIMO Systems
Converted optimization-based transmit antenna selection into a data-driven supervised classification problem (sketched below).
Implemented multi-class classification algorithms, then calculated and compared their computational complexity and system performance.
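A minimal sketch of the optimization-to-classification conversion, under the simplifying assumption that a single antenna is selected and that the largest channel-column gain serves as the capacity proxy for labeling; a logistic-regression classifier stands in for the several multi-class algorithms the project compared.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_samples, n_tx, n_rx = 2000, 4, 2

# Rayleigh-fading channel realizations: (samples, rx antennas, tx antennas)
H = (rng.normal(size=(n_samples, n_rx, n_tx)) +
     1j * rng.normal(size=(n_samples, n_rx, n_tx))) / np.sqrt(2)

# Labels: antenna with the largest channel-column gain (capacity proxy)
labels = np.argmax(np.sum(np.abs(H) ** 2, axis=1), axis=1)
features = np.abs(H).reshape(n_samples, -1)        # |h_ij| as features

clf = LogisticRegression(max_iter=1000).fit(features[:1500], labels[:1500])
print("selection accuracy:", clf.score(features[1500:], labels[1500:]))
```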
Simulation of Molecular Diffusion Based on Deep Learning
Programmed the Gillespie algorithm as a function following the underlying mathematical model (sketched below).
Computed the statistical characteristics of different reactants, such as their probability density functions, from biological signals.
Extended the stochastic model from unicellular to multicellular organisms and predicted reactions using recurrent neural networks.
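A minimal sketch of the Gillespie stochastic simulation algorithm for a toy birth-death network (production at rate k1, degradation at rate k2 * A); the reactions and rate constants are illustrative, not the project's biological model.

```python
import numpy as np

def gillespie(k1=10.0, k2=0.1, a0=0, t_end=100.0, seed=0):
    """Simulate copy number A(t) for: 0 -> A (rate k1), A -> 0 (rate k2*A)."""
    rng = np.random.default_rng(seed)
    t, a, times, counts = 0.0, a0, [0.0], [a0]
    while t < t_end:
        rates = np.array([k1, k2 * a])        # propensity of each reaction
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)     # exponential waiting time
        if rng.random() < rates[0] / total:   # which reaction fired?
            a += 1                            # production
        else:
            a -= 1                            # degradation
        times.append(t)
        counts.append(a)
    return np.array(times), np.array(counts)

times, counts = gillespie()
print("mean copy number:", counts.mean())     # ~ k1 / k2 at steady state
```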
Hierarchical Trees Coding for Image Compression Based on Wavelet Transform
Developed an improved multi-level tree coding algorithm based on the wavelet transform that addresses deficiencies of the existing algorithm.
Improved the SPIHT algorithm in MATLAB, reducing computational complexity and speeding up the wavelet transform without compromising image compression quality (see the wavelet-core sketch below).
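Full SPIHT is too long to sketch here; the snippet below shows only the wavelet core that SPIHT-style coders build on: multi-level 2-D decomposition, coefficient thresholding (the lossy compression step), and reconstruction. It uses Python with the PyWavelets package, although the project itself used MATLAB; the wavelet choice, level, and threshold are illustrative.

```python
import numpy as np
import pywt

image = np.random.default_rng(0).random((64, 64))       # stand-in image

coeffs = pywt.wavedec2(image, "bior4.4", level=3)        # 3-level 2-D DWT
arr, slices = pywt.coeffs_to_array(coeffs)

threshold = 0.1
arr[np.abs(arr) < threshold] = 0.0                       # discard small coeffs
kept = np.count_nonzero(arr) / arr.size

recon = pywt.waverec2(
    pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), "bior4.4")
psnr = 10 * np.log10(1.0 / np.mean((image - recon[:64, :64]) ** 2))
print(f"kept {kept:.1%} of coefficients, PSNR {psnr:.1f} dB")
```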