Sayak Mukherjee

Hi, welcome to my webpage!

I am a staff scientist in the Optimization and Control Group at Pacific Northwest National Laboratory (PNNL), Richland, WA. I received my Ph.D. from the Department of Electrical and Computer Engineering at North Carolina State University, Raleigh, and subsequently worked as a Postdoctoral Research Associate at PNNL. I was a visiting scholar at LIDS, MIT in the summer of 2022. During my Ph.D., I worked as an R&D intern at the New York Power Authority.

Updates:

Research Interests


Theoretical & Computational Tools: Optimal and Robust Control, Data-Driven Optimal Control using Reinforcement Learning / Adaptive Dynamic Programming, Distributed Control, Machine Learning with Graph Neural Networks, Bayesian Inference, Dynamical Systems Interpretations of Neural Networks, etc.

Applications: Energy Systems, Large-scale Power Grid Dynamics and Control, Analysis and Control of Wind-integrated Power Systems, Distributed Energy Resources, Grid-forming Inverters and Power Electronics-dominated Grids.

Education


Aug. 2015 - May 2020, Ph.D. in Electrical Engineering, North Carolina State University, Raleigh, NC, USA. Dissertation: Data-Driven Reinforcement Learning Control using Model Reduction Techniques: Theory and Applications to Power Systems. FREEDM Systems Center, Electrical and Computer Engineering, NC State. Advisor: Dr. Aranya Chakrabortty. GPA: 4.0/4.0, Major GPA: 4.277.

2011 - 2015, B.E. in Electrical Engineering, Jadavpur University, Kolkata, India. First Class Honours. CGPA: 9.38/10 (highest CGPA in EE), Percentage: 87.31.


Publications

Google Scholar page 

Journals: 

J8. S. Mukherjee, T.L. Vu, “Reinforcement Learning of Structured Control for Linear Systems with Unknown State Matrix”, accepted in IEEE Transactions on Automatic Control, 2022.

J7. S. Mukherjee, H. Bai, A. Chakrabortty, “Model-based and Model-free Designs for an Extended Continuous-time LQR with Exogenous Inputs”, Systems and Control Letters, Elsevier, 2021.

J6. S. Mukherjee, H. Bai, A. Chakrabortty, “Reduced-Dimensional Reinforcement Learning Control using Singular Perturbation Approximations”, Automatica, 2021.

J5. S. Mukherjee, R. Huang, Q. Huang, T.L. Vu, T. Yin, “Scalable Voltage Control using Structure-Driven Hierarchical Deep Reinforcement Learning”, submitted, arXiv preprint arXiv:2102.00077, 2021.

J4. S. Mukherjee, T.L. Vu, “On Distributed Model-Free Reinforcement Learning Control with Stability Guarantee”, IEEE Control Systems Letters, 2020.

J3. S. Mukherjee, A. Chakrabortty, H. Bai, A. Darvishi, B. Fardanesh, “Scalable Designs for Reinforcement Learning-based Wide-Area Control”, IEEE Transactions on Smart Grid, 2020.

J2. S. Mukherjee, A. Chakrabortty, S. Babaei, “Modeling and Quantifying the Impact of Wind Power Penetration on Slow Coherency of Power Systems”, IEEE Trans. on Power Systems, 2020.

J1. S. Mukherjee, S. Babaei, A. Chakrabortty, B. Fardanesh, “Designing a Measurement-driven Optimal Controller for a Utility-Scale Power System: A New York State Grid Perspective”, International Journal of Power and Energy Systems, Elsevier, 2020.


Conferences:

C17. S. Mukherjee, J. Drgoňa, A. Tuor, M. Halappanavar, D. Vrabie, “Neural Lyapunov Differentiable Predictive Control”, arXiv preprint arXiv:2205.10728, 2022.

C16. J. Drgoňa, S. Mukherjee, A. Tuor, M. Halappanavar, D. Vrabie, “Learning Stochastic Parametric Differentiable Predictive Control Policies”, accepted at IFAC ROCOND, arXiv preprint arXiv:2203.01447, 2022.

C15. T.L. Vu, S. Mukherjee, R. Huang, Q. Huang, “Safe Reinforcement Learning for Grid Voltage Control”, Workshop on Safe and Robust Control of Uncertain Systems at the 35th Conference on Neural Information Processing Systems (NeurIPS), 2021.

C14. J. Drgoňa, S. Mukherjee, J. Zhang, M. Halappanavar, F. Liu, “On the Stochastic Stability of Deep Markov Models”, 35th Conference on Neural Information Processing Systems (NeurIPS), Sydney, Australia, 2021.

C13. J. Zhang, J. Drgoňa, S. Mukherjee, M. Halappanavar, F. Liu, “Variational Generative Flows for Reconstruction Uncertainty Estimation”, ICML 2021 Workshop on Uncertainty and Robustness in Deep Learning.

C12. T.L. Vu, S. Mukherjee, R. Huang, J. Tan, Q. Huang, “Barrier Function-based Safe Reinforcement Learning for Emergency Control of Power Systems”, IEEE Conference on Decision and Control, 2021.

C11. S. Mukherjee, T.L. Vu, “On Distributed Model-Free Reinforcement Learning Control with Stability Guarantee”, American Control Conference (L-CSS presentation), 2021.

C10. T.L. Vu, S. Mukherjee, T. Yin, R. Huang, J. Tan, Q. Huang, “Safe Reinforcement Learning for Emergency Load Shedding of Power Systems”, IEEE PES General Meeting, 2021.

C9. S. Mukherjee, V. Adetola, “A Secure Learning Control Strategy via Dynamic Camouflaging for Unknown Dynamic Systems under Attacks”, IEEE CCTA, 2021.

C8. S. Mukherjee, H. Bai, and A. Chakrabortty, “Reinforcement Learning Control of Power Systems with Unknown Network Model under Ambient and Forced Oscillations”, invited paper at the IEEE Conference on Control Technology and Applications (CCTA), Montreal, Canada, 2020.

C7. S. Mukherjee, H. Bai, A. Chakrabortty, “On Robust Reduced-Dimensional Reinforcement Learning Control for Singularly Perturbed Systems”, American Control Conference, Denver, CO, 2020.

C6. S. Mukherjee, H. Bai, A. Chakrabortty, “Model-free Decentralized Reinforcement Learning Control for Distributed Energy Resources”, IEEE PES General Meeting, 2020.

C5. S. Mukherjee, H. Bai, A. Chakrabortty, “Block-Decentralized Model-Free Reinforcement Learning of Two-Time Scale Networks”, American Control Conference, 2019.

C4. S. Mukherjee, A. Darvishi, A. Chakrabortty, B. Fardanesh, “Learning Power System Dynamic Signatures using LSTM-Based Deep Neural Network: A Prototype Study on the New York State Grid”, IEEE PES General Meeting, Atlanta, GA, 2019.

C3. S. Mukherjee, H. Bai, A. Chakrabortty, “On Model-Free Reinforcement Learning for Singularly Perturbed Systems”, IEEE Conference on Decision and Control, Miami, Florida, 2018.

C2. S. Mukherjee, N. Xue, and A. Chakrabortty, “A Hierarchical Design for Damping Control of Wind-Integrated Power Systems Considering Heterogeneous Wind Farm Dynamics”, IEEE Conference on Control Technology and Applications, Denmark, 2018.

C1. S. Mukherjee, S. Babaei, and A. Chakrabortty, “A Measurement-based Approach for Optimal Damping Control of the New York State Power Grid”, IEEE PES General Meeting, Portland, OR, 2018.


Awards and Recognitions


Guest Lectures and Talks


T8. SIAM Annual Meeting 2021 presentation: J. Drgoňa, S. Mukherjee, J. Zhang, M. Halappanavar, F. Liu, "On Stability of Deep Neural Network-Based Models", 2021.

T7. Pacific Northwest National Laboratory TechFest21 talk, "Making Learning-based Controls Scalable by using Limited Structural Information", 2021.

T6. Featured lightning talk at Duke University Energy Data Analytics Symposium on “Scalable Reinforcement Learning-based Control of Distributed Energy Resources”, 2020.

T5. FREEDM, NC State technical tutorial on RL Control for Power Systems, 2020.

T4. Presentation at LIDS, MIT on “Reinforcement Learning Control using Dimensionality Reduction and Applications to Power System Dynamics”, 2020.

T3. Guest lectures on "Adaptive Optimal Control via Reinforcement Learning" for the NCSU course ECE 792: Adaptive Control and Reinforcement Learning.

T2. Invited talk on "Reinforcement Learning Based Wide-Area Control of Power Systems Using Dimensionality Reduction Techniques" on behalf of Dr. Aranya Chakrabortty at the 2019 Conference on Information Sciences and Systems (CISS), Johns Hopkins University, MD.

T1. Invited student talk on "Reduced-dimensional Reinforcement Learning Control for Time-scale Separated Dynamical Systems" at the Southeast Controls Conference 2019 at Georgia Tech.

Get in touch via email.