Sadman Sakib Enan
PhD & MS in CS @UMN | Machine Vision, Robotics
“The beautiful thing about learning is that nobody can take it away from you.”
― B.B. King
About Me
I am from Bangladesh, a small country with a big heart, located at the northern apex of the mighty Bay of Bengal. I am passionate about cars, soccer, cricket, and music. I enjoy making new friends, learning new things, and exploring new places.
NEWS
(Apr'24) Defended my doctoral dissertation titled "Vision-Based Computational Methods Towards Effective Underwater Multi-Human-Robot Interaction".
(Jan'24) Paper got accepted for publication in IEEE Robotics and Automation Letters (RA-L).
(Aug'23) Passed Doctoral Dissertation Proposal Exam.
(Jun'22) Paper got accepted for presentation at IROS 2022, Kyoto, Japan (selected as a Best Paper Award Finalist on Cognitive Robotics).
(Dec'21) Awarded the Master of Science in Computer Science and Engineering by the University of Minnesota.
(May'21) Started working at 3M as an R&D Graduate Data Science Intern.
(May'21) Passed Written & Oral Prelim Exams, Passed Master's Final Exam.
(Dec'20) Received the UMII-MnDRIVE Graduate Assistantship Award ($53,729).
(Jun'20) Workshop Paper got accepted at POGO RSS 2020.
(Jun'20) Paper 1 and Paper 2 got accepted for presentation at IROS 2020, Las Vegas, NV.
(Jan'20) Paper got accepted for presentation at ICRA 2020, Paris, France.
(Sep'19) Our recent work on underwater image super-resolution is available on GitHub. The dataset can be found here.
Work Interest
Developing end-to-end frameworks for autonomous robots using computer vision and artificial intelligence algorithms
Implementing and validating models on real robotic platforms and embedded GPUs (NVIDIA Jetson)
Pose estimation, segmentation networks, generative models, transformer architectures
Selected Projects
Diver Identification Using Anthropometric Data Ratios (2024 RA-L)
Recent advances in efficient design, perception algorithms, and computing hardware have made it possible to create improved human-robot interaction (HRI) capabilities for autonomous underwater vehicles (AUVs). To conduct secure missions as underwater human-robot teams, AUVs require the ability to accurately identify divers. However, this remains an open problem due to divers' challenging visual features, mainly caused by similar-looking scuba gear. In this paper, we present a novel algorithm that can perform diver identification using either pre-trained models or models trained during deployment. We exploit anthropometric data obtained from diver pose estimates to generate robust features that are invariant to changes in distance and photometric conditions. We also propose an embedding network that maximizes inter-class distances in the feature space and minimizes those for the intra-class features, which significantly improves classification performance. Furthermore, we present an end-to-end diver identification framework that operates on an AUV and evaluate the accuracy of the proposed algorithm. Quantitative results in controlled-water experiments show that our algorithm achieves a high level of accuracy in diver identification.
Project page: https://irvlab.cs.umn.edu/projects/human-robot-collaboration/diver-identification-using-anthropometric-data-ratios
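The key idea behind the distance-invariant features can be illustrated with a toy sketch: ratios of body-segment lengths computed from pose keypoints are unchanged under uniform scaling, which is what happens as a diver moves closer to or farther from the camera. The keypoint indices and the specific ratios below are hypothetical, not the ones used in the paper.

```python
import numpy as np

def limb_ratios(keypoints):
    """Compute ratios of body-segment lengths from 2D pose keypoints.
    Ratios are invariant to uniform scaling (i.e., camera distance).
    Keypoint layout here is hypothetical:
    0: left shoulder, 1: right shoulder, 2: left hip, 3: right hip, 4: left elbow."""
    def seg(a, b):
        return np.linalg.norm(keypoints[a] - keypoints[b])
    torso = seg(0, 2)          # shoulder-to-hip length
    shoulders = seg(0, 1)      # shoulder width
    upper_arm = seg(0, 4)      # shoulder-to-elbow length
    return np.array([shoulders / torso, upper_arm / torso])

# Scaling the whole pose (diver closer or farther) leaves the ratios unchanged.
pose = np.array([[0., 0.], [2., 0.], [0., 4.], [2., 4.], [-1.5, 1.]])
r_near = limb_ratios(3.0 * pose)   # same diver, closer to the camera
r_far = limb_ratios(pose)
assert np.allclose(r_near, r_far)
```

In the actual framework, features like these are fed to an embedding network trained so that samples of the same diver cluster together while different divers are pushed apart.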
Robotic Detection of a Human-Comprehensible Gestural Language (2022 IROS BEST PAPER FINALIST)
We present a motion-based robotic communication framework that enables non-verbal communication between autonomous underwater vehicles (AUVs) and human divers. We design a gestural language for AUV-to-AUV communication which can be easily understood by divers observing the conversation - unlike typical radio frequency, light, or audio-based AUV communication. To allow AUVs to visually understand a gesture from another AUV, we propose a deep network (RRCommNet) which exploits a self-attention mechanism to learn to recognize each message by extracting maximally discriminative spatio-temporal features. We train this network on diverse simulated and real-world data. Our experimental evaluations, both in simulation and in closed-water robot trials, demonstrate that the proposed RRCommNet architecture is able to decipher gesture-based messages with an average accuracy of 88-94% on simulated data and 73-83% on real data.
Project page: https://irvlab.cs.umn.edu/projects/human-robot-collaboration/robotic-detection-human-comprehensible-gestural-language
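The self-attention mechanism RRCommNet builds on can be sketched in isolation: scaled dot-product attention over a sequence of per-frame features lets each frame's representation mix in information from the most relevant other frames of the gesture. This is only the generic attention primitive, not the RRCommNet architecture itself; the feature dimensions are made up for illustration.

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over per-frame features X of shape (T, d).
    Each output row is a softmax-weighted mixture of all T frames, so the
    model can emphasize the most discriminative parts of a gesture sequence."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                        # (T, T) pairwise similarities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)        # row-wise softmax
    return weights @ X                                   # (T, d) attended features

rng = np.random.default_rng(0)
frames = rng.normal(size=(5, 8))   # 5 video frames, 8-dim features each (hypothetical)
attended = self_attention(frames)
assert attended.shape == frames.shape
```

A classifier head over such attended features would then map the whole sequence to one of the gestural-language messages.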
Underwater Image Super-Resolution using Deep Residual Multipliers (2020 ICRA)
We present a deep residual network-based generative model for single image super-resolution (SISR) of underwater imagery for use by autonomous underwater robots. We also provide an adversarial training pipeline for learning SISR from paired data. In order to supervise the training, we formulate an objective function that evaluates the perceptual quality of an image based on its global content, color, and local style information. Additionally, we present USR-248, a large-scale dataset of three sets of underwater images of ‘high’ (640 x 480) and ‘low’ (80 x 60, 160 x 120, and 320 x 240) resolution. USR-248 contains paired instances for supervised training of 2x, 4x, or 8x SISR models. Furthermore, we validate the effectiveness of our proposed model through qualitative and quantitative experiments and compare the results with those of several state-of-the-art models. We also analyze its practical feasibility for applications such as scene understanding and attention modeling in noisy visual conditions.
Project page: http://irvlab.cs.umn.edu/image-enhancement-and-super-resolution/srdrm-and-srdrm-gan
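The idea of an objective that scores global content, color, and local style can be illustrated with a toy composite loss. The three terms below (pixel-wise MSE for content, per-channel mean difference for color, and image-gradient difference for local style) are simplified stand-ins, not the perceptual loss formulated in the paper.

```python
import numpy as np

def composite_sisr_loss(sr, hr, w=(1.0, 0.5, 0.5)):
    """Toy composite SISR objective on images of shape (H, W, 3) in [0, 1].
    content: pixel-wise MSE (global content agreement)
    color:   per-channel mean intensity difference (color fidelity)
    style:   difference of horizontal/vertical image gradients (local structure)
    The weights w are arbitrary illustration values."""
    content = np.mean((sr - hr) ** 2)
    color = np.mean(np.abs(sr.mean(axis=(0, 1)) - hr.mean(axis=(0, 1))))
    gx = lambda im: np.diff(im, axis=1)
    gy = lambda im: np.diff(im, axis=0)
    style = np.mean(np.abs(gx(sr) - gx(hr))) + np.mean(np.abs(gy(sr) - gy(hr)))
    return w[0] * content + w[1] * color + w[2] * style

hr = np.linspace(0, 1, 8 * 8 * 3).reshape(8, 8, 3)  # hypothetical ground-truth patch
perfect = composite_sisr_loss(hr.copy(), hr)        # identical output -> zero loss
assert perfect == 0.0
```

In an adversarial pipeline such as the one described, a loss of this general shape would supervise the generator alongside the discriminator's feedback.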
Design and Experiments with LoCO AUV: A Low Cost Open-Source Autonomous Underwater Vehicle (2020 IROS)
We present LoCO AUV, a Low-Cost, Open Autonomous Underwater Vehicle. LoCO is a general-purpose, single-person-deployable, vision-guided AUV, rated to a depth of 100 meters. We discuss the open and expandable design of this underwater robot, as well as the design of a simulator in Gazebo. Additionally, we explore the platform’s preliminary local motion control and state estimation abilities, which enable it to perform maneuvers autonomously. In order to demonstrate its usefulness for a variety of tasks, we implement a variety of our previously presented human-robot interaction capabilities on LoCO, including gestural control, diver following, and robot communication via motion. Finally, we discuss the practical concerns of deployment and our experiences in using this robot in pools, lakes, and the ocean. All design details, instructions on assembly, and code will be released under a permissive, open-source license.
Project page: http://irvlab.cs.umn.edu/other-projects/loco-auv
Selected Publications
SS Enan and J Sattar, "Semantically-Aware Diver Activity Recognition Framework for Effective Underwater Multi-Human-Robot Collaboration," (Under review), 2024.
J Hong*, SS Enan* (*equal contribution), and J Sattar, "Diver Identification Using Anthropometric Data Ratios for Underwater Multi-Human-Robot Collaboration," IEEE Robotics and Automation Letters (RA-L), 2024.
SS Enan and J Sattar, "Visual Detection of Diver Attentiveness for Underwater Human-Robot Interaction," (Under review), 2023.
SS Enan, M Fulton, and J Sattar, "Robotic Detection of a Human-Comprehensible Gestural Language for Underwater Multi-Human-Robot Collaboration," 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 2022, pp. 3085-3092.
C Edge, SS Enan, M Fulton, J Hong, J Mo, K Barthelemy, H Bashaw, B Kallevig, C Knutson, K Orpen, and J Sattar, "Design and Experiments with LoCO AUV: A Low Cost Open-Source Autonomous Underwater Vehicle," 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, USA, 2020, pp. 1761-1768.
MJ Islam, SS Enan, P Luo, and J Sattar, "Underwater Image Super-Resolution using Deep Residual Multipliers," in 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 2020, pp. 900-906.
Community
Reviewer
IEEE Robotics and Automation Letters (RA-L): 2021.
IEEE Journal of Oceanic Engineering (JOE): 2022.
IEEE International Conference on Robotics and Automation (ICRA): 2021, 2023, 2024.
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS): 2020, 2022.
Member
IEEE