Research Interests

Neurochaos Learning (NL): When Chaos & Noise Meet Machine Learning (with Harikrishnan NB)

Chaos and noise are ubiquitous in the brain. Inspired by the chaotic firing of neurons and the constructive role of noise in neuronal models, we connect chaos, noise and learning for the first time. In this paper, we demonstrate the Stochastic Resonance (SR) phenomenon in Neurochaos Learning (NL). SR manifests at the level of a single NL neuron and enables efficient subthreshold signal detection. Furthermore, SR is shown to occur in single- and multiple-neuron NL architectures for classification tasks, both on simulated and real-world spoken-digit datasets, and in architectures with chaotic maps as well as Hindmarsh–Rose spiking neurons. Intermediate levels of noise in neurochaos learning enable peak performance in classification tasks, thus highlighting the role of SR in AI applications, especially in brain-inspired learning architectures.
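
The essence of SR can be reproduced in a few lines: a periodic signal too weak to cross a neuron's firing threshold becomes detectable only when an intermediate amount of noise is added. The sketch below (plain NumPy; the threshold, signal amplitude and noise levels are illustrative choices, not the parameters of our NL experiments) scores detection as the correlation between the clean signal and the resulting spike train.

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 5000)
    signal = 0.8 * np.sin(2 * np.pi * t)   # subthreshold: peak amplitude < threshold
    threshold = 1.0

    def detection_quality(noise_std, n_trials=20):
        """Correlation between the clean signal and the thresholded spike train,
        averaged over noise realizations (a simple proxy for detection quality)."""
        scores = []
        for _ in range(n_trials):
            noisy = signal + rng.normal(0.0, noise_std, size=t.shape)
            spikes = (noisy > threshold).astype(float)  # 1 wherever the neuron "fires"
            # A silent neuron has zero variance; count that as failed detection.
            scores.append(np.corrcoef(signal, spikes)[0, 1] if spikes.std() > 0 else 0.0)
        return np.mean(scores)

    for sigma in [0.01, 0.1, 0.3, 0.6, 1.0, 2.0]:
        print(f"noise std = {sigma:4.2f} -> detection score = {detection_quality(sigma):.3f}")
    # The score rises, peaks at an intermediate noise level, and falls again:
    # the characteristic inverted-U signature of stochastic resonance.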

For further details, see: arXiv:2102.01316 [q-bio.NC]

Published in Neural Networks: https://doi.org/10.1016/j.neunet.2021.06.025


ChaosNet: A Chaos based Artificial Neural Network Architecture For Classification (with Harikrishnan NB, Aditi Kathpalia and Snehanshu Saha)

Inspired by the chaotic firing of neurons in the brain, we propose ChaosNet, a novel chaos-based artificial neural network architecture for classification tasks. ChaosNet is built from layers of neurons, each of which is a 1D chaotic map known as the Generalized Lüroth Series (GLS), shown in earlier work to possess very useful properties for compression, cryptography, and computing XOR and other logical operations. In this work, we design a novel learning algorithm on ChaosNet that exploits the topological transitivity property of the chaotic GLS neurons. The proposed learning algorithm gives consistently good accuracy on a number of classification tasks on well-known, publicly available datasets with very limited training samples. Even with as few as seven (or fewer) training samples per class (less than 0.05% of the total available data), ChaosNet yields accuracies in the range of 73.89%–98.33%. We demonstrate the robustness of ChaosNet to additive parameter noise and also provide an example implementation of a two-layer ChaosNet for enhancing classification accuracy. We envisage the development of several other novel learning algorithms on ChaosNet in the near future.
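
The two ingredients of ChaosNet are easy to state: a 1D GLS (skew-tent) map serves as the neuron, and topological transitivity guarantees that its trajectory, started from a fixed initial activity, eventually visits any eps-neighbourhood of a normalized stimulus. The number of iterations needed, the firing time, is the extracted feature. A minimal sketch follows; the values of q, b and eps here are illustrative, not the tuned hyperparameters reported in the paper.

    def gls_map(x, b=0.47):
        """One iteration of the skew-tent GLS map on [0, 1]."""
        return x / b if x < b else (1.0 - x) / (1.0 - b)

    def firing_time(stimulus, q=0.34, b=0.47, eps=0.01, max_iter=100000):
        """Iterations for the chaotic trajectory started at the fixed initial
        activity q to enter the eps-neighbourhood of the stimulus; topological
        transitivity guarantees this happens for almost every starting point."""
        x = q
        for n in range(max_iter):
            if abs(x - stimulus) < eps:
                return n
            x = gls_map(x, b)
        return max_iter  # safety cap; rarely hit for sensible eps

    # Each normalized input feature is replaced by its firing-time feature;
    # a simple rule (e.g. nearest class-mean in firing-time space) classifies.
    sample = [0.12, 0.55, 0.90]            # one normalized input vector
    print([firing_time(s) for s in sample])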

For further details, see: arXiv:1910.02423v1 [cs.LG], arXiv:1905.12601 [q-bio.NC]

Published in Chaos (AIP): https://aip.scitation.org/doi/10.1063/1.5120831

Causality Testing (with Aditi Kathpalia)

Causality testing, the act of determining cause and effect from measurements, is widely used in physics, climatology, neuroscience, econometrics and other disciplines. As a result, a large number of causality testing methods based on various principles have been developed. Causal relationships in complex systems are typically accompanied by entropic exchanges, which are encoded in patterns of dynamical measurements. A data compression algorithm that can extract these encoded patterns could therefore be used to infer such relationships. This motivates us to propose, for the first time, a generic causality testing framework based on data compression. The framework unifies existing causality testing methods and enables us to introduce a novel Compression-Complexity Causality (CCC) measure. This measure is rigorously tested on simulated and real-world time series and is found to overcome the limitations of Granger Causality and Transfer Entropy, especially for noisy and non-synchronous measurements. Additionally, it gives insight into the 'kind' of causal influence between input time series via the notions of positive and negative causality.
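
At the heart of CCC is the Effort-To-Compress (ETC) measure: the number of iterations of the Non-Sequential Recursive Pair Substitution (NSRPS) algorithm needed to reduce a symbolic sequence to a constant one. The sketch below implements ETC and a conditional compression-complexity of a window given a past; the full CCC measure averages such quantities over sliding windows and uses a bivariate ETC for conditioning on a second series, which this simplification omits.

    from collections import Counter

    def etc(seq):
        """Effort-To-Compress: number of NSRPS pair-substitution steps needed to
        reduce a symbolic (numeric) sequence to a constant or single-symbol one."""
        seq = list(seq)
        steps = 0
        new_symbol = max(seq) + 1            # fresh symbol for each substitution
        while len(seq) > 1 and len(set(seq)) > 1:
            target = Counter(zip(seq, seq[1:])).most_common(1)[0][0]
            out, i = [], 0
            while i < len(seq):              # replace left-to-right, non-overlapping
                if i + 1 < len(seq) and (seq[i], seq[i + 1]) == target:
                    out.append(new_symbol)
                    i += 2
                else:
                    out.append(seq[i])
                    i += 1
            seq, new_symbol, steps = out, new_symbol + 1, steps + 1
        return steps

    def cc_conditional(dx, past):
        """Compression-complexity of the window dx given the past: the extra
        effort needed once dx is appended to the past."""
        return etc(list(past) + list(dx)) - etc(list(past))

    x = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0]
    print(etc(x), cc_conditional([1, 0], x))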

For further details, see: https://arxiv.org/abs/1710.04538

Published in PeerJ-CS: https://peerj.com/articles/cs-196/

Information, Complexity and Consciousness (with Mohit Virmani)

Quantifying integrated information is a leading approach towards building a fundamental theory of consciousness. Integrated Information Theory (IIT) has gained attention in this regard due to its theoretically strong framework, but it faces limitations such as dependence on the current state, high computational cost, and inability to be applied to real brain data. On the other hand, the Perturbational Complexity Index (PCI) is a clinical measure for distinguishing different levels of consciousness. Though PCI claims to capture the functional differentiation and integration in brain networks (similar to IIT), its link to integrated information theories is rather weak. Inspired by these two approaches, we propose a new measure, Φ^C, based on a novel compression-complexity perspective that serves, for the first time, as a bridge between the two. Φ^C is founded on lossless-data-compression-based complexity measures which characterize the dynamical complexity of brain networks. Φ^C exhibits the following salient innovations: (i) it is mathematically well bounded; (ii) it has negligible current-state dependence, unlike Φ; (iii) integrated information is measured as compression-complexity rather than as an information-theoretic quantity; and (iv) it is faster to compute, since the number of atomic bipartitions scales linearly with the number of nodes of the network, thus avoiding combinatorial explosion. Our computer simulations show that Φ^C has a hierarchy similar to that of <Φ> for several multiple-node networks, and that it demonstrates a rich interplay between differentiation, integration and entropy of the nodes of a network. Φ^C is a promising heuristic measure for characterizing the quantity of integrated information (and hence the quantity of consciousness) in larger networks like the human brain, and it provides an opportunity to test the predictions of brain complexity on real neural data.
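
A rough feel for the atomic-bipartition idea can be had in a few lines of code. The sketch below is schematic only: it uses zlib compressed length as a crude stand-in for the ETC-style compression-complexity used in the paper, and the whole-versus-parts difference it computes is an illustrative surrogate, not the published Φ^C formula. What it does mirror is the linear scan over atomic bipartitions ({node} versus rest), which avoids the combinatorial explosion of general partitions.

    import zlib
    import numpy as np

    def cc(binary_matrix):
        """Crude compression-complexity proxy: zlib-compressed length of a
        binarized multivariate time series (stand-in for ETC-based measures)."""
        return len(zlib.compress(np.ascontiguousarray(binary_matrix, dtype=np.uint8).tobytes()))

    def integration_sketch(data):
        """For each atomic bipartition {k} vs rest, compare the compression
        effort of the parts with that of the whole; take the minimum over k.
        data: (nodes, time) binary array. Only the linear number of
        bipartitions mirrors the paper; the formula itself is illustrative."""
        whole = cc(data)
        diffs = [cc(data[[k], :]) + cc(np.delete(data, k, axis=0)) - whole
                 for k in range(data.shape[0])]
        return min(diffs)

    rng = np.random.default_rng(1)
    independent = rng.integers(0, 2, size=(5, 2000))   # unintegrated system
    coupled = np.tile(independent[:1], (5, 1))         # fully synchronized copies
    print(integration_sketch(independent), integration_sketch(coupled))
    # Under this crude proxy the synchronized system shows a much larger
    # whole-versus-parts gap, i.e. more "integration".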

For further details, see: https://arxiv.org/abs/1608.08450

Published in Heliyon: https://www.sciencedirect.com/science/article/pii/S2405844017338318

Neural Signal Multiplexing (with K R Sahasranand)

Transport of neural signals in the brain is challenging owing to neural interference and neural noise. There is experimental evidence of multiplexing of sensory information across populations of neurons, particularly in the vertebrate visual and olfactory systems. Recently, it has been discovered that in the lateral intraparietal cortex of the brain, decision signals are multiplexed with decision-irrelevant visual signals. Furthermore, it is well known that several cortical neurons exhibit chaotic spiking patterns. The multiplexing of chaotic neural signals, and their successful demultiplexing in the neurons amidst interference and noise, is difficult to explain. In this work, we propose a novel compressed sensing model for efficient multiplexing of chaotic neural signals constructed using the Hindmarsh–Rose spiking model. The signals are multiplexed from a pre-synaptic neuron to its neighbouring post-synaptic neuron in the presence of 10,000 interfering noisy neural signals, and demultiplexed using compressed sensing techniques.
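
The core recovery step is standard compressed sensing, which the toy sketch below illustrates using scikit-learn's Lasso as the L1 solver: a signal that is sparse in some basis (standing in for the chaotic Hindmarsh–Rose signal of the paper) is multiplexed through a random projection and recovered from far fewer measurements than samples. The dimensions, sparsity and noise level are illustrative.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)

    n, m, k = 400, 120, 8                      # signal length, measurements, sparsity
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x[support] = rng.normal(0, 1, size=k)      # k-sparse "neural" signal

    A = rng.normal(0, 1.0 / np.sqrt(m), size=(m, n))   # random sensing matrix
    y = A @ x + rng.normal(0, 0.01, size=m)            # noisy multiplexed measurements

    lasso = Lasso(alpha=0.005, max_iter=50000)
    lasso.fit(A, y)                                     # L1-regularized demultiplexing
    x_hat = lasso.coef_

    err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
    print(f"relative recovery error: {err:.3f}")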

For further details, see: https://arxiv.org/abs/1601.03214

Perspectives on Complexity (with Karthi Balasubramanian)

There is no single universally accepted definition of "Complexity". There are several perspectives on complexity and on what constitutes complex behaviour or complex systems, as opposed to regular, predictable behaviour and simple systems. We explore the following perspectives on complexity: "effort-to-describe" (Shannon entropy H, Lempel-Ziv complexity LZ), "effort-to-compress" (ETC complexity) and "degree-of-order" (Subsymmetry, or SubSym). While Shannon entropy and LZ are very popular and widely used, ETC is a recently proposed measure for time series. We also propose a novel normalized measure, SubSym, based on the existing idea of counting the number of subsymmetries or palindromes within a sequence. We compare the performance of these complexity measures on the following tasks: (a) characterizing the complexity of short binary sequences of lengths 4 to 16, (b) distinguishing periodic and chaotic time series from the 1D logistic map and the 2D Hénon map, and (c) distinguishing between tonic and irregular spiking patterns generated from the "Adaptive exponential integrate-and-fire" neuron model. Our study reveals that each perspective has its own advantages and unique strengths, while also overlapping with the others.
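
Two of these perspectives fit in a few lines each, as sketched below: the classic LZ76 parsing count ("effort-to-describe") and a raw subsymmetry count ("degree-of-order"). The normalization used for SubSym in the paper is omitted here, and the logistic-map symbolization is just one illustrative way to produce a chaotic binary string.

    def lz_complexity(s):
        """LZ76 complexity: number of distinct phrases in the classic
        exhaustive parsing of the string (Kaspar-Schuster formulation)."""
        i, k, l, k_max, c, n = 0, 1, 1, 1, 1, len(s)
        while True:
            if s[i + k - 1] == s[l + k - 1]:
                k += 1
                if l + k > n:
                    c += 1
                    break
            else:
                k_max = max(k, k_max)
                i += 1
                if i == l:                   # phrase complete; start the next one
                    c += 1
                    l += k_max
                    if l + 1 > n:
                        break
                    i, k, k_max = 0, 1, 1
                else:
                    k = 1
        return c

    def subsym_count(s):
        """Number of palindromic substrings of length >= 2 ('subsymmetries')."""
        n = len(s)
        return sum(1 for i in range(n) for j in range(i + 2, n + 1)
                   if s[i:j] == s[i:j][::-1])

    # Periodic vs chaotic binary strings (logistic map at r = 4, threshold 0.5).
    periodic = "01" * 8
    x, bits = 0.37, []
    for _ in range(16):
        x = 4.0 * x * (1.0 - x)
        bits.append("1" if x > 0.5 else "0")
    chaotic = "".join(bits)
    print(lz_complexity(periodic), subsym_count(periodic))
    print(lz_complexity(chaotic), subsym_count(chaotic))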

For further details, see: https://arxiv.org/abs/1611.00607

Aging and Cardiovascular Complexity (with Karthi Balasubramanian)

As we age, our hearts undergo changes that reduce the complexity of physiological interactions between different control mechanisms, resulting in an elevated risk of cardiovascular diseases, which are the number one cause of death globally. Since cardiac signals are nonstationary and nonlinear in nature, complexity measures are well suited to such data. In this study, three complexity measures are used, namely Lempel–Ziv complexity (LZ), Sample Entropy (SampEn) and Effort-To-Compress (ETC). We determined the minimum length of RR tachogram required for characterizing the complexity of healthy young and healthy old hearts. All three measures indicated significantly lower complexity values for older subjects than for younger ones. However, the minimum length of heart-beat interval data needed differs across the three measures, with LZ and ETC needing as few as 10 samples, whereas SampEn requires at least 80 samples. Our study indicates that complexity measures such as LZ and ETC are good candidates for the analysis of cardiovascular dynamics, since they are able to work with very short RR tachograms.
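
SampEn is the most data-hungry of the three measures, and its definition makes it easy to see why: it needs enough template matches at lengths m and m + 1 for the ratio A/B to be estimated at all. Below is a straightforward quadratic-time sketch; the synthetic RR series is illustrative, not real recorded data.

    import numpy as np

    def sample_entropy(x, m=2, r=None):
        """SampEn = -ln(A/B), where B and A count template pairs of length m and
        m + 1 matching within tolerance r (Chebyshev distance), self-matches excluded."""
        x = np.asarray(x, dtype=float)
        if r is None:
            r = 0.2 * x.std()                # conventional tolerance choice

        def match_count(mm):
            templates = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
            return sum(np.max(np.abs(templates[i] - templates[j])) <= r
                       for i in range(len(templates))
                       for j in range(i + 1, len(templates)))

        B, A = match_count(m), match_count(m + 1)
        return -np.log(A / B) if A > 0 and B > 0 else np.inf

    # Illustrative synthetic RR tachogram in milliseconds (not real ECG data).
    rng = np.random.default_rng(2)
    rr = 800 + 40 * np.sin(np.linspace(0, 6, 100)) + rng.normal(0, 15, 100)
    print(f"SampEn(m=2) on {len(rr)} RR intervals: {sample_entropy(rr):.3f}")
    # On much shorter tachograms A (or B) frequently drops to zero and the
    # estimate degenerates, in line with SampEn's larger data requirement.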

For further details, see: https://peerj.com/articles/2755/

I am very grateful to my guides:

Prabhakar G. Vaidya (Ph.D. Advisor, 2005–2008)

William A. Pearlman (M.S. Advisor, 1999–2001)
