Research Interests
Deep Learning, Reinforcement Learning, Stochastic Approximation and Large Scale Markov Decision Processes
Publications
Chandrashekar Lakshminarayanan and Amit Vikram Singh, “Neural Path Features and Neural Path Kernel: Understanding the role of gates in deep learning”, NeurIPS, 2020
Chandrashekar Lakshminarayanan and Csaba Szepesvári, “Linear Stochastic Approximation: How far does constant step size and iterate averaging go?”, AISTATS, 2018
Chandrashekar Lakshminarayanan, Shalabh Bhatnagar and Csaba Szepesvári, “A Linearly Relaxed Approximate Linear Program for Markov Decision Processes”, IEEE Transactions on Automatic Control, 2018
Chandrashekar Lakshminarayanan and Shalabh Bhatnagar, “A Stability Criterion for Two Timescale Stochastic Approximation Schemes”, Automatica, 2017
Sandeep Kumar, Sindhu Padakandla, Chandrashekar Lakshminarayanan, Priyank Parihar, Kanchi Gopinath and Shalabh Bhatnagar, “Scalable Performance Tuning of Hadoop MapReduce: A Noisy Gradient Approach”, IEEE CLOUD, 2017
Raj Kumar Maity, Chandrashekar Lakshminarayanan, Sindhu Padakandla and Shalabh Bhatnagar, “Shaping Proto-Value Functions Using Rewards”, ECAI, 2016
Chandrashekar Lakshminarayanan and Shalabh Bhatnagar, “A Generalized Reduced Linear Program for Markov Decision Processes”, Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI), 2015
Chandrashekar Lakshminarayanan and Shalabh Bhatnagar, “Approximate Dynamic Programming with (min, +) linear function approximation for Markov Decision Processes”, 53rd IEEE Annual Conference on Decision and Control (CDC), 2014
Chandrashekar Lakshminarayanan, Ayush Dubey, Shalabh Bhatnagar and Chithralekha Balamurugan, “A Markov Decision Process framework for predictable job completion times on crowdsourcing platforms”, Second AAAI Conference on Human Computation and Crowdsourcing (HCOMP), 2014