Figure: Traditional models for drug discovery treat proteins as single, monolithic structures. This "flat" perspective leads to two failure modes: 1) Over-smoothing: key signals at the interfaces between different protein domains get diluted. 2) Geometric neglect: models fail to recognize critical binding sites located in the "clefts" or gaps between domains.
The core breakthrough of our proposed DAGML is its modular approach, shifting from "single-structure" to "domain-aware" modeling. Its architecture consists of three pillars: 1) Hierarchical Representation: it combines protein language models (ESM-2) with geometric encoders (GearNet) and injects learnable "domain embeddings" to help the model distinguish between the protein's different functional units. 2) Inter-domain Modeling: instead of dense connections, it uses sparse message passing (within an 8 Å distance threshold) to specifically model the interfaces where domains meet, while filtering out noisy, flexible "linker" regions. 3) Motif-Centric Encoding: for ligands, it focuses on pharmacophore motifs (functional groups) rather than individual atoms, allowing better "semantic matching" between the drug and the protein.
Reference: Zhang, S., Liu, J. K., Domain-Aware Geometric Multimodal Learning for Multi-Domain Protein-Ligand Affinity Prediction, 14th International Conference on Bioinformatics and Computational Biology (2026) [PDF] [CODE] DOI: 10.48550/arXiv.2601.17102
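The sparse inter-domain message passing described above can be sketched minimally: keep only residue pairs that belong to different domains and sit within the 8 Å cutoff, and drop everything else. This is an illustrative NumPy sketch under those assumptions; the function and variable names are ours, not the DAGML code:

```python
import numpy as np

def interface_edges(coords, domain_ids, cutoff=8.0):
    """Collect sparse inter-domain edges: residue pairs from different
    domains whose coordinates lie within `cutoff` angstroms.
    Intra-domain pairs and distant pairs are skipped, which is the
    "sparse message passing at interfaces" idea described above."""
    n = len(coords)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if domain_ids[i] == domain_ids[j]:
                continue  # only model interfaces between different domains
            if np.linalg.norm(coords[i] - coords[j]) <= cutoff:
                edges.append((i, j))  # candidate interface contact
    return edges

# Toy example: two domains along one axis; only close cross-domain
# pairs survive the filter.
coords = np.array([[0.0, 0, 0], [3, 0, 0], [5, 0, 0], [20, 0, 0]])
domain_ids = [0, 0, 1, 1]
print(interface_edges(coords, domain_ids))  # [(0, 2), (1, 2)]
```

A production implementation would replace the O(n²) loop with a spatial index (e.g. a k-d tree); the 8 Å cutoff matches the threshold quoted in the caption.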
Figure: This study systematically investigates the neuronal mechanisms of tumor-associated seizures (TAS), focusing on pathophysiological alterations in the peritumoral cortex (PTC). By analyzing cortical tissue from brain tumor patients undergoing resection, the authors demonstrate that pyramidal neurons in the PTC of patients with TAS exhibit significant differences in synaptic activity, dendritic spine density, and gene expression compared to seizure-free patients. Using an inducible glioma mouse model, the research tracks the progression of these neuronal changes during early tumor development, revealing that hyperexcitability emerges prior to seizure onset. Computational modeling indicates that human cortical neurons are highly sensitive to synaptic and dendritic perturbations, leading to paroxysmal depolarizing shifts (PDS) in affected networks. Longitudinal clinical analysis shows that PDS can be detected before clinical seizures in a subset of patients and serves as a reliable predictor of post-operative seizure occurrence. Additionally, the FDA-approved drug sulfasalazine is shown to partially reverse neuronal abnormalities in the mouse model. These findings identify key neuronal substrates of TAS and propose PDS as a potential biomarker for seizure risk, providing a foundation for targeted diagnostic and therapeutic strategies.
Reference: Bouwen, B., Bolleboom, A., Tang, Y., Yu, Z., van der Stap, A., van Rij, J., van Dis, V., Dirven, C., de Zeeuw, C., van Tellingen, O., Liu, J. K., Vincent, A., Gao, Z., Aberrant neural activity in the peritumoral cortex underlies the progression of tumor associated seizures, Nature Communications, 16, 10846 (2025) [PDF] [CODE] DOI: 10.1038/s41467-025-66226-5
Figure: The WISA model enhances visual scene reconstruction from neural spikes in three stages: it first applies a wavelet transform to extract time-frequency features, refines them with a temporal convolutional network, and then uses an inverse wavelet transform to create augmented spike trains for a CNN-based decoder. This pipeline significantly improves reconstruction fidelity over existing methods.
Reference: Peng, J., Jia, S., Zhang, J., Wang, Y., Yu, Z., Liu, J. K., Decoding Natural Visual Scenes via Learnable Representations of Neural Spiking Sequences, Neural Netw. 192, 107863 (2025) [PDF] [CODE] DOI: 10.1016/j.neunet.2025.107863
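The wavelet round-trip at the heart of the pipeline above can be illustrated with a one-level Haar transform: decompose the spike signal into low- and high-frequency bands, refine the bands (here left as identity), and reconstruct. This toy sketch uses fixed Haar filters and omits the learnable TCN refinement, so it illustrates the transform pair only, not the paper's model:

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar wavelet transform: split a spike-count signal
    into approximation (low-frequency) and detail (high-frequency) bands."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse one-level Haar transform: rebuild the (possibly refined,
    i.e. augmented) spike train from its two bands."""
    even = (approx + detail) / np.sqrt(2)
    odd = (approx - detail) / np.sqrt(2)
    out = np.empty(2 * len(approx))
    out[0::2], out[1::2] = even, odd
    return out

# Round trip: with no refinement in between, the signal is
# reconstructed exactly (perfect-reconstruction property).
spikes = [4.0, 2.0, 6.0, 0.0]
approx, detail = haar_dwt(spikes)
print(haar_idwt(approx, detail))  # [4. 2. 6. 0.]
```

In WISA's setting, the refinement network would operate on the bands between these two calls, so the inverse transform yields an augmented rather than identical spike train.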
Figure: Artificial Intelligence models used to understand proteins are often "giants"—they are incredibly powerful but require massive, expensive computing power to customize for specific medical or scientific tasks. This study introduces SeqProFT, a clever way to tune these AI giants without the high cost. By using a technique called LoRA (Low-Rank Adaptation), researchers don't have to retrain the entire model. Instead, they add a tiny, efficient "adapter" that learns new tasks. The result? A much smaller model, running on standard hardware, can now perform as well as—or even better than—models four times its size. Furthermore, the team found a way to help the AI "see" the 3D shape of a protein just by looking at its sequence, making predictions about protein stability and function more accurate and easier for scientists to trust.
Reference: Zhang, S., Liu, J. K., SeqProFT: Sequence-only Protein Property Prediction with LoRA Finetuning, IEEE Transactions on Artificial Intelligence (2025) [PDF] [CODE] DOI: 10.1109/TAI.2025.3636109
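The LoRA idea described above, keeping the pretrained weight frozen and training only a tiny low-rank update, can be sketched as follows. The hyperparameter names r and alpha follow the LoRA convention; everything else is an illustrative sketch, not the SeqProFT code:

```python
import numpy as np

class LoRALinear:
    """Frozen pretrained weight W plus a trainable low-rank update B @ A.
    Only A (r x d_in) and B (d_out x r) are trained, with rank r much
    smaller than the layer width, so adaptation touches a tiny fraction
    of the parameters."""
    def __init__(self, W, r=8, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        d_out, d_in = W.shape
        self.W = W                                 # frozen base weight
        self.A = rng.normal(0, 0.01, (r, d_in))    # trainable down-projection
        self.B = np.zeros((d_out, r))              # trainable up-projection,
                                                   # zero-initialized
        self.scale = alpha / r

    def forward(self, x):
        # base path + scaled low-rank adapter path
        return x @ self.W.T + self.scale * (x @ self.A.T @ self.B.T)

# At initialization the adapter contributes nothing (B is zero), so the
# layer reproduces the frozen base model exactly.
W = np.arange(6.0).reshape(2, 3)
layer = LoRALinear(W, r=2)
x = np.ones((1, 3))
print(np.allclose(layer.forward(x), x @ W.T))  # True
```

Because B starts at zero, finetuning begins from the pretrained model's behavior and only diverges as A and B are trained, which is what keeps LoRA adaptation stable and cheap.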