I work on bridging neuroscience-inspired AI (NeuroAI) with practical applications in health monitoring. My research explores bio-inspired neural network architectures designed for efficient edge computing, aiming to enable next-generation wearable technologies for continuous and accessible healthcare.

Below is a selection of recent and representative works.

A full list of my publications is available on Google Scholar.


Recent Works:

Cortical circuits exhibit a striking architectural constraint: the systematic suppression of strong reciprocal connections, known as the “no-strong-loops” principle. While the anatomical basis of this principle has been documented across species, its computational consequences have remained elusive. In this work, we provide the first mechanistic account of how asymmetric reciprocity in cortical connectivity enhances computational performance.


In our previous work (https://lnkd.in/e9yuAviE), we introduced Network Reciprocity Control (NRC): algorithms that adjust link and strength reciprocity while preserving key structural features (e.g., density/degree for binary graphs; total weight for weighted graphs). Here, we apply NRC to systematically vary reciprocity and quantify its computational consequences across architectures, sizes, and sparsity regimes.
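To make the basic move concrete, here is a minimal NumPy sketch of a density-preserving reciprocity adjustment for binary directed graphs. This is an illustrative reimplementation of the idea, not the released NRC code: each step deletes one edge and adds one, so edge count (and hence density) is unchanged while link reciprocity is pushed up or down.

```python
import numpy as np

def adjust_reciprocity(A, n_moves, increase=True, seed=None):
    """Illustrative density-preserving reciprocity adjustment for a
    binary directed graph (adjacency matrix A, no self-loops).
    Each move deletes one edge and adds one edge, so the edge count
    is unchanged while link reciprocity moves up or down."""
    rng = np.random.default_rng(seed)
    A = A.copy()
    n = A.shape[0]
    off_diag = ~np.eye(n, dtype=bool)
    for _ in range(n_moves):
        # unidirectional edges: i -> j present, j -> i absent
        uni = np.argwhere((A == 1) & (A.T == 0))
        if increase:
            if len(uni) < 2:
                break
            # delete one unidirectional edge ...
            k, m = uni[rng.choice(len(uni), size=2, replace=False)]
            A[k[0], k[1]] = 0
            # ... and reciprocate another one
            A[m[1], m[0]] = 1
        else:
            # reciprocated pairs (i < j to count each pair once)
            rec = np.argwhere((A == 1) & (A.T == 1))
            rec = rec[rec[:, 0] < rec[:, 1]]
            # node pairs with no edge in either direction
            absent = np.argwhere((A == 0) & (A.T == 0) & off_diag)
            if len(rec) == 0 or len(absent) == 0:
                break
            # break one reciprocal pair ...
            i, j = rec[rng.integers(len(rec))]
            A[j, i] = 0
            # ... and place the freed edge between unconnected nodes
            c, d = absent[rng.integers(len(absent))]
            A[c, d] = 1
    return A
```

The same bookkeeping generalises to weighted graphs by moving weight instead of edges, conserving total weight rather than edge count.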


What we did:

- Used reservoir computing to isolate structural effects from learning; evaluated memory capacity and kernel rank across 64–256 nodes, ultra-sparse/sparse regimes, multiple topologies (random, small-world, hierarchical-modular, core–periphery, hierarchical-modular–core–periphery), and weight distributions.

- Validated trends on directed primate connectomes (macaque long-distance, macaque visual cortex, and marmoset).
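For intuition, the memory-capacity measure used above can be estimated for any fixed reservoir weight matrix along these lines. This is a simplified sketch; the delay range, scaling constants, and readout details are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def memory_capacity(W, max_delay=20, T=2000, washout=200, seed=0):
    """Estimate reservoir memory capacity: the sum over delays k of the
    squared correlation between the delayed input u(t-k) and its linear
    readout reconstruction from the reservoir state."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    # scale recurrent weights to spectral radius 0.9 (echo-state regime)
    W = 0.9 * W / np.max(np.abs(np.linalg.eigvals(W)))
    w_in = rng.uniform(-0.5, 0.5, n)        # random input weights
    u = rng.uniform(-1.0, 1.0, T)           # i.i.d. input stream
    x = np.zeros(n)
    states = np.empty((T, n))
    for t in range(T):                      # drive the reservoir
        x = np.tanh(W @ x + w_in * u[t])
        states[t] = x
    X = states[washout:]                    # discard the transient
    mc = 0.0
    for k in range(1, max_delay + 1):
        y = u[washout - k:T - k]            # input delayed by k steps
        w_out, *_ = np.linalg.lstsq(X, y, rcond=None)
        mc += np.corrcoef(y, X @ w_out)[0, 1] ** 2
    return mc
```

Because the recurrent weights stay fixed and only the linear readout is trained, any change in capacity is attributable to the structure of W, which is what lets reciprocity effects be isolated from learning.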


Key findings:

- Increasing reciprocity (link or strength) consistently reduces memory capacity and representational diversity across all architectures and densities; the decline is steepest in ultra-sparse regimes.

- Spectral analyses show reciprocity raises the spectral radius, narrows the spectral gap, and reduces non-normality, collectively compressing dynamical range despite spectral normalization near the echo-state boundary.

- Ultra-sparse hierarchical-modular networks excel, but only when reciprocity is constrained; sparse hierarchical-modular networks excel across all reciprocity values.

- Comparative advantages of network topologies shift with reciprocity, sparsity, and weight distribution.
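The spectral quantities in these findings have standard definitions that are easy to compute. The snippet below uses the spectral radius, the gap |λ1| − |λ2|, and Henrici's departure from normality as the non-normality index; these are common choices, and the paper's exact measures may differ.

```python
import numpy as np

def spectral_profile(W):
    """Return (spectral radius, spectral gap, non-normality) for a
    weight matrix W. Non-normality is Henrici's departure from
    normality, sqrt(||W||_F^2 - sum |lambda_i|^2), which is zero
    iff W is normal (e.g., any symmetric, fully reciprocal matrix)."""
    mags = np.sort(np.abs(np.linalg.eigvals(W)))[::-1]
    radius = mags[0]
    gap = mags[0] - mags[1]
    henrici = np.sqrt(max(np.linalg.norm(W, 'fro')**2 - np.sum(mags**2), 0.0))
    return radius, gap, henrici
```

A fully reciprocal (symmetric) matrix is normal, so its Henrici index is zero; purely feed-forward (strictly triangular) connectivity is maximally non-normal, which illustrates why raising reciprocity reduces non-normality.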


Why it matters:

- Neuroscience: offers a functional rationale for the evolutionary suppression of strong reciprocal motifs in the cortex.

- NeuroAI: proposes reciprocity as a design parameter for recurrent and neuromorphic systems, where managing it can enhance stability, memory, and efficiency.


Paper: A computational perspective on the no-strong-loops principle in brain networks


Code: https://github.com/m00rcheh/network-reciprocity-RC-memory-capacity-and-kernel-rank

We introduced efficient Network Reciprocity Control (NRC) algorithms for steering the degree of asymmetry and reciprocity in binary and weighted networks while preserving fundamental network properties. Our methods maintain edge density in binary networks and cumulative edge weight in weighted graphs. 

These algorithms enable systematic investigation of the relationship between directional asymmetry and network topology, with potential applications in computational and network sciences, social network analysis, and artificial recurrent neural networks (RNNs), as well as other fields studying complex network systems where the directionality of connections is essential.
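As a reference point, the link and strength reciprocity that NRC steers can themselves be computed in a few lines. The weighted version below uses one common definition, Σ min(w_ij, w_ji) / Σ w_ij; the paper may parameterise reciprocity differently.

```python
import numpy as np

def link_reciprocity(A):
    """Fraction of directed edges whose reverse edge also exists."""
    A = (A != 0).astype(float)
    np.fill_diagonal(A, 0)
    return (A * A.T).sum() / A.sum()

def strength_reciprocity(W):
    """Weighted analogue: reciprocated weight sum(min(w_ij, w_ji))
    over total weight sum(w_ij). Equals 1 for symmetric W and 0
    when no connection has a reverse counterpart."""
    W = W.copy().astype(float)
    np.fill_diagonal(W, 0)
    return np.minimum(W, W.T).sum() / W.sum()
```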


Paper: Controlling Reciprocity in Binary and Weighted Networks: A Novel Density-Conserving Approach

Code: https://github.com/m00rcheh/NRC_binary_and_weighted_Network_Reciprocity_Control



Neural networks now generate text, images, and speech with billions of parameters, creating a need to understand how each neural unit contributes to these high-dimensional outputs. Existing explainable-AI methods, such as SHAP, attribute importance to inputs but cannot quantify the contributions of neural units across thousands of output pixels, tokens, or logits. Here we close that gap with Multiperturbation Shapley-value Analysis (MSA), a model-agnostic game-theoretic framework. By systematically lesioning combinations of units, MSA yields Shapley Modes: unit-wise contribution maps that share the exact dimensionality of the model's output. We apply MSA across scales, from multi-layer perceptrons to the 56-billion-parameter Mixtral-8x7B and Generative Adversarial Networks (GANs). The approach demonstrates how regularisation concentrates computation in a few hubs, exposes language-specific experts inside the LLM, and reveals an inverted pixel-generation hierarchy in GANs. Together, these results showcase MSA as a powerful approach for interpreting, editing, and compressing deep neural networks.
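The core idea, estimating each unit's Shapley value by sampling lesion orders, can be sketched generically. This is an illustration of permutation-sampling Shapley estimation, not the MSA package API; `value_fn` is a hypothetical stand-in for evaluating the model with a given subset of units intact, and for Shapley Modes it would return a vector rather than a scalar.

```python
import numpy as np

def shapley_values(units, value_fn, n_perms=200, seed=0):
    """Permutation-sampling estimate of Shapley values.
    value_fn(active_set) returns the model's performance when only
    the units in active_set are intact; all others are lesioned.
    Averaging each unit's marginal contribution over random
    inclusion orders converges to its Shapley value."""
    rng = np.random.default_rng(seed)
    units = list(units)
    phi = {u: 0.0 for u in units}
    for _ in range(n_perms):
        order = rng.permutation(units)       # random lesion order
        active = set()
        prev = value_fn(active)              # everything lesioned
        for u in order:
            active.add(u)                    # restore unit u
            cur = value_fn(active)
            phi[u] += cur - prev             # marginal contribution
            prev = cur
    return {u: v / n_perms for u, v in phi.items()}
```

In an additive game, where the value of a coalition is just the sum of its members' weights, the estimate recovers each weight exactly; real networks are interesting precisely because their units interact and the marginals vary with context.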


Paper: Who Does What in Deep Learning? Multidimensional Game-Theoretic Attribution of Function of Neural Units

Code: https://github.com/ShreyDixit/MSA-XAI

In this study, we adopted an exploratory approach to investigate the connectivity profile of auditory–visual integration networks (AVIN) in children with ADHD and neurotypical controls using the ADHD-200 rs-fMRI dataset.

We expanded our exploration beyond network-based statistics (NBS) by extracting a diverse range of graph theoretical features. These features formed the basis for applying machine learning (ML) techniques to discern distinguishing patterns between the control group and children with ADHD. To address class imbalance and sample heterogeneity, we employed ensemble learning models, including balanced random forest (BRF), XGBoost, and EasyEnsemble classifier (EEC).
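A toy version of such a pipeline looks like the following. Here a plain-sklearn class-weighted random forest stands in for the imbalanced-learn ensembles named above (BRF, XGBoost, EEC), and synthetic features stand in for the graph-theoretic ones; only the structure of the evaluation is meant to carry over.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic stand-in for graph-theoretic features of two groups,
# with the class imbalance typical of clinical datasets.
rng = np.random.default_rng(0)
n_ctrl, n_adhd, n_feat = 120, 40, 30
X = np.vstack([rng.normal(0.0, 1.0, (n_ctrl, n_feat)),
               rng.normal(0.6, 1.0, (n_adhd, n_feat))])
y = np.r_[np.zeros(n_ctrl), np.ones(n_adhd)]

# class_weight='balanced_subsample' reweights each bootstrap sample,
# a simple proxy for the balanced-ensemble classifiers in the paper.
clf = RandomForestClassifier(n_estimators=200,
                             class_weight='balanced_subsample',
                             random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring='balanced_accuracy')
print(round(scores.mean(), 3))
```

Balanced accuracy (the mean of per-class recalls) is the appropriate cross-validation metric here, since plain accuracy can look deceptively high on imbalanced classes.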

Our findings revealed significant differences in AVIN between ADHD individuals and neurotypical controls, enabling automated diagnosis with moderate accuracy (74.30%). 


Paper: Exploring potential ADHD biomarkers through advanced machine learning: an examination of audiovisual integration networks

Code: https://github.com/zamanzadeh98/Differential-Patterns-of-Associations-within-Audiovisual-Integration-Networks-in-Children-with-ADHD