We developed the FUSE algorithm for fast node embedding generation on large graphs in the absence of given node features.
Graph-based learning is a cornerstone of analyzing structured data. However, in many real-world graphs, nodes lack informative feature vectors, leaving neighborhood connectivity as the only available signal. In such cases, downstream tasks such as node classification, contrastive learning, link prediction, and node clustering hinge on learning node embeddings that capture structural roles and topological context. Since many current node embedding algorithms are slow and can be ineffective in dynamic settings, our focus is on developing fast node embedding learning algorithms that can serve diverse downstream tasks. We are especially interested in applications where fast embedding learning is essential, e.g. dynamically changing or context-perturbed graphs.
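As a minimal illustration of what "structure-only" embeddings mean (this is not the FUSE algorithm, just a hypothetical baseline), one can embed nodes using only the adjacency matrix, e.g. via a truncated spectral decomposition of the normalized adjacency:

```python
import numpy as np

def spectral_embedding(adj: np.ndarray, dim: int = 2) -> np.ndarray:
    """Embed nodes of an unweighted graph using the leading eigenvectors
    of the symmetrically normalized adjacency matrix (no node features
    needed; an illustrative baseline, not FUSE)."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(deg), 0.0)
    norm_adj = adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(norm_adj)        # eigenvalues in ascending order
    order = np.argsort(-np.abs(vals))[:dim]      # keep largest-magnitude modes
    return vecs[:, order] * np.abs(vals[order])  # scale columns by |eigenvalue|

# toy graph: two triangles joined by a single bridge edge
adj = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[i, j] = adj[j, i] = 1.0
emb = spectral_embedding(adj, dim=2)
```

Structurally similar nodes (e.g. the two triangle corners away from the bridge) receive similar coordinates, which is the kind of topological signal the downstream tasks above rely on.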
We developed the MSDE algorithm, which can enhance the density distribution of data, with applications in unsupervised anomaly detection, feature selection, denoising, and related tasks.
We study density enhancement as a structure-aware data transformation paradigm that reveals latent geometric organization in high-dimensional data. In density-driven dynamics such as Mean-Shift Density Enhancement (MSDE), data points are iteratively shifted toward regions of higher local density, tracing trajectories that encode neighborhood structure, boundary regions, and manifold-level organization. Beyond anomaly detection, density enhancement yields noise-suppressed and geometry-aware representations that can support downstream tasks including clustering, classification, feature relevance analysis, and graph-based learning without altering the ambient feature space. This enables principled data enhancement that improves learning reliability while remaining compatible with existing models and pipelines.
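The core mean-shift idea behind such dynamics can be sketched in a few lines: each point is repeatedly moved toward the kernel-weighted mean of its neighborhood, contracting the data toward density modes. The function and parameter names below are illustrative assumptions, not the MSDE implementation:

```python
import numpy as np

def mean_shift_enhance(X: np.ndarray, bandwidth: float = 1.0,
                       n_steps: int = 10, step_size: float = 0.5) -> np.ndarray:
    """Shift each point toward the Gaussian-kernel-weighted mean of the
    original data, sharpening the empirical density (a generic mean-shift
    sketch; MSDE itself may differ in kernel, schedule, and stopping rule)."""
    Y = X.copy()
    for _ in range(n_steps):
        # squared distances from current positions to all original points
        d2 = ((Y[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2 * bandwidth ** 2))          # Gaussian kernel weights
        means = (w @ X) / w.sum(axis=1, keepdims=True)  # weighted neighborhood means
        Y = Y + step_size * (means - Y)                 # damped shift toward modes
    return Y

rng = np.random.default_rng(0)
# two Gaussian blobs; enhancement contracts each blob toward its mode
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
Z = mean_shift_enhance(X, bandwidth=0.5)
```

After enhancement the within-blob spread shrinks while the ambient feature space is unchanged, which is what makes the transformed data useful for clustering or anomaly scoring.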
We developed the SADP learning paradigm for fast local learning in SNNs.
Spike-Timing-Dependent Plasticity (STDP) provides a biologically grounded learning rule for spiking neural networks (SNNs), but its reliance on precise spike timing and pairwise updates limits the speed of weight learning. We introduce Spike Agreement-Dependent Plasticity (SADP), which replaces pairwise spike-timing comparisons with population-level agreement metrics such as Cohen's Kappa. The proposed learning rule preserves strict synaptic locality, admits linear-time complexity, and enables efficient learning without backpropagation or surrogate gradients. We are now exploring the development of diverse neural network architectures assisted by the SADP learning paradigm. The main goal of this project is to develop fast learning algorithms for low-power computation.
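To make the agreement-based idea concrete, the sketch below computes Cohen's Kappa between binned pre- and postsynaptic spike trains and drives a local weight update from it. This is a hypothetical simplification for illustration, not the published SADP rule; note the update is O(T) in the number of time bins per synapse, consistent with the linear-time claim:

```python
import numpy as np

def cohens_kappa(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's kappa between two binary spike trains (1 = spike in a bin)."""
    po = np.mean(a == b)                                        # observed agreement
    pe = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())  # chance agreement
    return (po - pe) / (1 - pe) if pe < 1 else 0.0

def sadp_update(w: float, pre: np.ndarray, post: np.ndarray,
                lr: float = 0.05) -> float:
    """Agreement-driven local update: potentiate when pre/post spike trains
    agree beyond chance, depress when they disagree (hypothetical
    simplification of SADP; only pre, post, and w are used, so the
    rule is strictly local to the synapse)."""
    return w + lr * cohens_kappa(pre, post)

rng = np.random.default_rng(1)
pre = rng.integers(0, 2, 100)     # 100 time bins of presynaptic activity
post = pre.copy()                 # perfectly agreeing postsynaptic train
w_up = sadp_update(0.5, pre, post)
```

With identical trains kappa equals 1, so the weight is potentiated by exactly one learning-rate step; anti-correlated trains yield negative kappa and depression.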
We developed the ConvGeN algorithm for data enrichment in the context of imbalanced classification.
One of our key research directions is context-aware synthetic tabular data generation. We view synthetic data not as a single generic artifact, but as a resource whose notion of quality depends strongly on its intended use. Applications such as classification, privacy-preserving data sharing, patient stratification, or exploratory analysis impose different requirements on what aspects of the data distribution should be preserved. Consequently, effective synthetic data generation must adapt to the downstream context rather than aiming for uniform realism across all settings. Our work focuses on developing generative strategies that explicitly incorporate contextual information such as task objectives, dependency structure among features, and domain-specific constraints into the data generation process.
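As a toy illustration of minority-class data enrichment (a simplified, hypothetical variant of convex-combination oversampling; not the ConvGeN generator, which learns such combinations with a neural network), synthetic samples can be drawn as random convex combinations within minority neighborhoods:

```python
import numpy as np

def convex_oversample(minority: np.ndarray, n_new: int, k: int = 3,
                      seed: int = 0) -> np.ndarray:
    """Generate synthetic minority samples as random convex combinations
    of each anchor point's k nearest minority neighbors (illustrative
    sketch only; function name and defaults are assumptions)."""
    rng = np.random.default_rng(seed)
    # pairwise squared distances within the minority class
    d2 = ((minority[:, None] - minority[None, :]) ** 2).sum(-1)
    synth = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        nbrs = np.argsort(d2[i])[:k]        # anchor plus its nearest neighbors
        coef = rng.dirichlet(np.ones(k))    # convex coefficients, sum to 1
        synth.append(coef @ minority[nbrs])
    return np.array(synth)

minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
new_pts = convex_oversample(minority, n_new=10, k=3)
```

Because every synthetic point is a convex combination of observed minority samples, the enriched data stays inside the convex hull of each local neighborhood, one concrete way in which the generation process can respect dependency structure among features.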
In this project, we maintain a close association with the group of Prof. Olaf Wolkenhauer, at the University of Rostock, Germany.