Collaborative Research: DARE: A Personalized Assistive Robotic System that Assesses Cognitive Fatigue in Persons with Paralysis - Funded by the National Science Foundation (NSF), 2022-2025 (estimated)
Objective: With advances in robotics and artificial intelligence, assistive robotic systems have the potential to provide support and care to people with Spinal Cord Injury (SCI). As robots become as widespread as today’s mobile phones, assistive robots can play a significant role in helping persons with disabilities at home, improving their independence and everyday quality of life. The objective of this project is to design and develop an end-to-end personalized assistive robotic system, called iRCSA (Intelligent Robotic Cooperation for Safe Assistance), that recognizes, assesses, and responds to a human’s cognitive fatigue during human-robot cooperation. The system focuses on human-robot cooperative tasks in which a human with SCI and a robot cooperate on daily tasks (e.g., cooking). Students who have experienced SCI will be involved in every stage of the project to ensure the acceptability and usability of the proposed system. In addition to the significant impact of this research on life independence for persons with disabilities, the project includes the development of new university courses on assistive technologies and summer school programs for K-12 students, so that students gain knowledge of robotics and assistive technologies for prospective studies in Science, Technology, Engineering, and Math (STEM).
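The recognize-assess-respond pipeline can be illustrated with a minimal sketch. The sensed features, the fatigue score, and the robot responses below are illustrative assumptions for exposition, not the actual iRCSA design.

```python
# Minimal sketch of a recognize-assess-respond loop for cognitive fatigue.
# All feature names, thresholds, and responses are hypothetical placeholders.
import random

def sense_user():
    """Stand-in for multimodal sensing (e.g., facial cues, task performance)."""
    return {'blink_rate': random.uniform(0, 1),
            'response_delay': random.uniform(0, 1)}

def assess_fatigue(features):
    """Map sensed features to a fatigue score in [0, 1] (toy weighted sum)."""
    return 0.5 * features['blink_rate'] + 0.5 * features['response_delay']

def respond(fatigue):
    """Adapt the robot's behavior in the cooperative task."""
    if fatigue > 0.7:
        return 'take_over_subtask'      # robot assumes more of the work
    elif fatigue > 0.4:
        return 'slow_down_and_check_in'
    return 'continue_cooperation'

for step in range(5):
    f = assess_fatigue(sense_user())
    print('step %d: fatigue=%.2f -> %s' % (step, f, respond(f)))
```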
Related Publications: [1][2][3][4]
Videos: [1]
Collaborative Cooking [1]
Control Framework for a Socially Assistive Robot - Funded by SCU University Research Grant
Objective: The objective of the proposed project is to design and develop a control framework for a socially assistive robot that could be used for therapeutic interventions at school for children with Developmental Coordination Disorder (DCD). A company has donated a socially assistive robot to the HMI^2 lab (see photo on the right). The robot is equipped with a mobile base and a laser scanner, a large display/tablet, a small display on its head, and two arms. The robot’s hardware is functional, but it lacks a software framework. Therefore, the main goal of this project is to develop a Robot Operating System (ROS) framework that controls the robot’s actions, enabling it to move in the environment and move its arms.
For this project, an interdisciplinary team of undergraduate students (one mechanical engineer, one electrical and computer engineer, and one computer scientist) will work on the following tasks: (I) ROS-based Sensor/Motor Communication and Low-Level Control and (II) High-Level Robot Control; a minimal code sketch of Task I is given below.
Socially Assistive Robot
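As a starting point for Task I, a minimal ROS node could connect the laser scanner to the mobile base, as in the sketch below. The topic names (/scan, /cmd_vel) and message types follow common ROS conventions and are assumptions here; the actual interfaces depend on the donated robot’s drivers.

```python
#!/usr/bin/env python
# Minimal sketch of ROS-based sensor/motor communication and low-level control.
# Topic names are conventional assumptions, not the robot's documented API.
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

class LowLevelController(object):
    def __init__(self):
        self.cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
        rospy.Subscriber('/scan', LaserScan, self.scan_cb)
        self.min_range = float('inf')

    def scan_cb(self, scan):
        # Track the closest valid obstacle reported by the laser scanner.
        valid = [r for r in scan.ranges if scan.range_min < r < scan.range_max]
        self.min_range = min(valid) if valid else float('inf')

    def run(self):
        rate = rospy.Rate(10)  # 10 Hz control loop
        while not rospy.is_shutdown():
            cmd = Twist()
            # Drive forward slowly; stop if an obstacle is within 0.5 m.
            cmd.linear.x = 0.1 if self.min_range > 0.5 else 0.0
            self.cmd_pub.publish(cmd)
            rate.sleep()

if __name__ == '__main__':
    rospy.init_node('low_level_controller')
    LowLevelController().run()
```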
Personalization in Human-Robot Interaction - Funded by Kuehler Undergraduate Engineering Research Award @ SCU School of Engineering
Objective: With the advancements in robotics and Artificial Intelligence (AI), robots have the potential to be part of our everyday lives and support us with everyday tasks. However, to ensure a good user experience, robots need to be personalized: they must adapt to the user’s personality, preferences, abilities, and needs. In this project, we will develop a framework that enables a robot to learn personalized preferences during a collaborative cooking scenario; a minimal code sketch of one such learning rule is given below.
Proposed Personalized Robot Learning Framework
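One possible way to learn such preferences is from pairwise user feedback, as in the Bradley-Terry-style sketch below. The action names, the feedback source, and the learning rate are illustrative assumptions, not the project’s actual framework.

```python
# Minimal sketch: learning a user's task preferences from pairwise feedback
# via gradient steps on a Bradley-Terry model. All names are hypothetical.
import math

ACTIONS = ['chop_vegetables', 'stir_pot', 'fetch_ingredient']  # hypothetical
scores = {a: 0.0 for a in ACTIONS}  # latent preference utilities
LR = 0.5  # learning rate

def p_prefers(a, b):
    """Model's probability that the user prefers robot action a over b."""
    return 1.0 / (1.0 + math.exp(scores[b] - scores[a]))

def update(winner, loser):
    """Gradient step on the Bradley-Terry log-likelihood."""
    p = p_prefers(winner, loser)
    scores[winner] += LR * (1.0 - p)
    scores[loser] -= LR * (1.0 - p)

# Example: the user repeatedly indicates the robot should stir while they chop.
for _ in range(20):
    update('stir_pot', 'chop_vegetables')

print('Robot offers to:', max(scores, key=scores.get))
```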
Intelligent Hands-free Multimodal Interface for Human-Robot Interaction - Funded by SCU University Research Grant
Objective: The objective of the proposed project is to design and develop an intelligent “hands-free” multimodal interface that enhances the interaction between robots and persons who cannot use their hands. This interface has the potential to support people with quadriplegia, but also people who collaborate with robots (e.g., in industrial settings or warehouses) and whose hands are busy performing other tasks. The figure on the right shows an overview of the proposed research and its two main tasks: (I) Intelligent Hands-free Multimodal Interface and (II) Automated Generation of Sequence of Robotic Actions; a minimal code sketch follows the figure caption below.
Intelligent Hands-Free Multimodal Interface
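To make the two tasks concrete, the sketch below shows one hypothetical way a hands-free interface could fuse two input modalities and map a confirmed command to a sequence of robotic actions. The recognizers, vocabulary, and action set are illustrative assumptions, not the proposed system’s actual design.

```python
# Minimal sketch: fusing hands-free inputs (speech + head gesture) and
# generating a sequence of robot actions. All names are hypothetical.
from typing import List

# Hypothetical mapping from high-level commands to primitive action sequences.
ACTION_SEQUENCES = {
    'bring water': ['navigate_to(kitchen)', 'grasp(cup)',
                    'fill(cup, water)', 'navigate_to(user)', 'handover(cup)'],
    'open door':   ['navigate_to(door)', 'grasp(handle)', 'pull(handle)'],
}

def fuse_modalities(speech: str, head_gesture: str) -> str:
    """Simple fusion rule: speech carries the command, a head nod confirms it."""
    if head_gesture == 'nod':
        return speech.lower().strip()
    return ''  # no confirmation, no command issued

def plan(command: str) -> List[str]:
    """Look up the action sequence for a confirmed high-level command."""
    return ACTION_SEQUENCES.get(command, [])

# Example: the user says "bring water" and nods to confirm.
for action in plan(fuse_modalities('Bring water', 'nod')):
    print('execute:', action)
```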