FIRA & TAROS Plenary Speakers

(Speakers in alphabetical order)

TAROS IET Public Lecture: Era of Social Robots


Professor Shuzhi Sam Ge

IEEE, IET & IFAC Fellow, The National University of Singapore/University of Electronic Science and Technology of China.
 

Professor Shuzhi Sam Ge received the B.Sc. degree from Beijing University of Aeronautics and Astronautics (BUAA), China, in 1986, and the Ph.D. degree and the Diploma of Imperial College (DIC) from Imperial College of Science, Technology and Medicine, University of London, London, United Kingdom, in 1993. From May 1992 to June 1993, he was engaged in postdoctoral research at Leicester University, UK. He has been with the Department of Electrical & Computer Engineering, The National University of Singapore, since July 1993, and has been a full Professor since January 2005. Since July 2009, he has also been a Professor at the University of Electronic Science and Technology of China (UESTC). Professor Ge is currently the Director of the Social Robotics Lab, Institute of Interactive Digital Media (IDMI), National University of Singapore (NUS). He leads a strong research team working on autonomous robotics (sensor fusion, path planning, decision making), intelligent control, intelligent interactive media fusion, and educational software development.


Summary:

Social robots are envisioned to become an integral part of our social fabric as we embrace the coming “silver” society. Across continents, governments have foreseen the needs of an aging population and funded a number of leading groups in universities and research institutes to focus on the research and development of social robots for improving services, healthcare and productivity.

In this talk, I will first present a brief history of social robotics and the state of the art in research and development. Then, I will introduce three social robots developed at the Social Robotics Laboratory of the National University of Singapore: Carine for interactive edutainment, Adam for hospitality service, and Nancy for human-robot collaboration. To appreciate the advanced technologies that make social robots possible, I shall describe the core technological modules, or building blocks, including locomotion, intelligent control, visuo-auditory interaction, and artificial skin. A deeper understanding of, and further advances in, these modules can help make social robots more appealing, more engaging, and ultimately better companions.
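Purely as a hypothetical illustration of how such building blocks might be composed into one perception-decision-action loop, the Python outline below sketches the idea. The module names and interfaces are invented for this sketch and are not those of the NUS robots.

    # Hypothetical composition of social-robot building blocks into one control loop.
    # All class names and interfaces are invented for illustration only (Python 3.10+).
    from dataclasses import dataclass

    @dataclass
    class Percept:
        speech: str | None = None
        faces: list | None = None
        touch: list | None = None   # e.g. contact events reported by an artificial skin

    class VisuoAuditoryInterface:
        def sense(self) -> Percept:
            # Placeholder for camera/microphone processing.
            return Percept(speech=None, faces=[])

    class ArtificialSkin:
        def contacts(self) -> list:
            return []

    class IntelligentController:
        def decide(self, percept: Percept) -> str:
            # Map the fused percept to a high-level behaviour.
            return "greet" if percept.faces else "idle"

    class Locomotion:
        def execute(self, behaviour: str) -> None:
            print(f"executing behaviour: {behaviour}")

    def control_step(eyes_ears: VisuoAuditoryInterface,
                     skin: ArtificialSkin,
                     controller: IntelligentController,
                     base: Locomotion) -> None:
        # Fuse visuo-auditory and tactile input, decide, then act.
        percept = eyes_ears.sense()
        percept.touch = skin.contacts()
        base.execute(controller.decide(percept))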

As a community, we are working hard to develop social robots that help make our “silver” society more vibrant, connected, and productive. Let me illustrate part of this big picture by presenting a few robots that address different issues such as security, mobility, and productivity. Finally, I would like to conclude my presentation with a few futuristic scenarios of how humans and social robots can live, work, and enjoy life together happily.

 
Closing Plenary: Joint Action between Humans and Robots - Lessons Learned from more than 10 years of Research
 
Prof. Dr.-Ing. habil. Alois Knoll
Institut für Informatik VI, Technische Universität München, Germany.

Alois C. Knoll received the diploma (M.Sc.) degree in Electrical/Communications Engineering from the University of Stuttgart, Germany, in 1985 and his Ph.D. (summa cum laude) in computer science from the Technical University of Berlin, Germany, in 1988. He served on the faculty of the computer science department of TU Berlin until 1993, when he qualified for teaching computer science at a university (habilitation). He then joined the Technical Faculty of the University of Bielefeld, where he was a full professor and the director of the research group Technical Informatics until 2001. Between May 2001 and April 2004 he was a member of the board of directors of the Fraunhofer-Institute for Autonomous Intelligent Systems. At AIS he was head of the research group "Robotics Construction Kits", dedicated to research and development in the area of educational robotics. Since autumn 2001 he has been a professor of Computer Science at the Computer Science Department of the Technische Universität München. He is also on the board of directors of the Central Institute of Medical Technology at TUM (IMETUM-Garching); between April 2004 and March 2006 he was Executive Director of the Institute of Computer Science at TUM.



Summary:

The success of the human species depends heavily on our ability to work together on challenges that individual humans cannot solve on their own. Humans are therefore experts at working closely together on common tasks. But can we also build robots that are able to work together with humans?

This talk summarises the work of several projects that focused on joint action between humans and robots: JAST (Joint Action Science and Technology), JAHIR (Joint Action for Humans and Industrial Robots), BAJA (Basic Aspects of Joint Action), and JAMES (Joint Action for Multimodal Embodied Social Systems). We will show how robots that are to work together with humans must be designed and constructed. To do this, we analysed how humans work together and transferred this knowledge to various robots. This includes the robot's software architecture, robust multimodal input recognition and processing, mechanisms to guarantee the human's safety, and the generation of multimodal output to the human.
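As a schematic illustration of such a processing loop, a single joint-action step might be outlined in Python as below. The function names, fusion rule, and safety threshold are invented for this sketch and do not come from JAST, JAHIR, BAJA, or JAMES.

    # Hypothetical joint-action step: multimodal input is fused, checked against a
    # safety monitor, and answered with multimodal (spoken + motion) output.
    from dataclasses import dataclass

    @dataclass
    class HumanInput:
        utterance: str
        gesture: str
        distance_m: float   # distance between the human and the robot workspace

    def fuse(inp: HumanInput) -> str:
        # Very crude multimodal fusion: a pointing gesture disambiguates the speech.
        if "that one" in inp.utterance and inp.gesture == "pointing":
            return "hand_over_pointed_object"
        return "wait"

    def safe(intent: str, inp: HumanInput) -> bool:
        # Safety mechanism: refuse arm motion when the human is too close (threshold assumed).
        return inp.distance_m > 0.3 or intent == "wait"

    def respond(intent: str) -> tuple[str, str]:
        # Multimodal output: a spoken response paired with a robot action.
        if intent == "hand_over_pointed_object":
            return ("Here you are.", "move_arm_to_handover_pose")
        return ("I'm waiting for your instruction.", "hold_position")

    def joint_action_step(inp: HumanInput) -> tuple[str, str]:
        intent = fuse(inp)
        if not safe(intent, inp):
            return ("Please step back a little.", "stop_motion")
        return respond(intent)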


Opening Plenary: Reinforcement Methods and Graphical Games for Autonomous Online Learning in Robotic Systems

Professor Frank L. Lewis
IEEE, IFAC & InstMC (UK) Fellow
The University of Texas at Arlington, USA

Professor Frank L. Lewis is a registered Professional Engineer in the State of Texas and a Chartered Engineer with the U.K. Engineering Council. He spent six years in the U.S. Navy. He has served on the editorial boards of numerous international journals and as an Editor of Automatica. He is the Editor of the Taylor & Francis book series in Automation & Control Engineering. Dr. Lewis is the recipient of a Fulbright Research Award, the International Neural Network Society Gabor Award, the American Society for Engineering Education F.E. Terman Award, and numerous other research awards. He received the IEEE Control Systems Society Best Chapter Award (as Founding Chairman), received the IEEE Dallas Section Outstanding Service Award in 1994, and was selected as Engineer of the Year in 1995 by the Ft. Worth IEEE Section. He is an Adjunct Professor at the Georgia Institute of Technology and has served on the NASA Committee on the Space Station.


Summary:

In this talk we first present some notions of reinforcement learning in autonomous robotic systems. We then introduce a new form of team game in which the interactions of the agents are restricted by an underlying communication graph topology.

Reinforcement learning (RL) is a method of learning better control actions by observing the responses to our current actions. RL is based on the way natural organisms and animals learn in response to their environment; Ivan Pavlov used precepts of RL in training his dogs in the 19th century. These learning methods capture multi-timescale cognitive phenomena in the human brain. In RL, a desired performance measure is specified, and techniques are given for updating control actions so as to improve that prescribed performance measure. Performance measures can capture minimum energy, minimum-time motion, minimum fuel, maximum cost benefit, and so on.
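As a rough illustration of this idea, the following minimal Python sketch shows a tabular Q-learning loop in which a reward signal encodes the prescribed performance measure and the action values are updated from observed responses to the current actions. The environment interface (env.reset, env.step, env.actions) and all parameter values are assumptions made for the sketch, not methods from the talk.

    # Minimal tabular Q-learning sketch (illustrative only).
    import random
    from collections import defaultdict

    def q_learning(env, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
        # Q[state][action] estimates the long-run performance of taking `action` in `state`.
        Q = defaultdict(lambda: defaultdict(float))

        def greedy(state):
            return max(env.actions(state), key=lambda a: Q[state][a])

        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                # Explore occasionally, otherwise act on the current estimate.
                actions = env.actions(state)
                action = random.choice(actions) if random.random() < epsilon else greedy(state)
                next_state, reward, done = env.step(action)
                # Temporal-difference update: move Q toward the observed reward plus the
                # discounted value of the best next action.
                best_next = max(Q[next_state][a] for a in env.actions(next_state)) if not done else 0.0
                Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
                state = next_state
        return Q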

In this talk we present methods for learning optimal robot behaviors online using reinforcement methods.  Novel methods of using RL for updating the control inputs in dynamical system models are presented.  These are online learning methods based on RL actor-critic structures that update control actions so as to learn optimal motion solutions in real time using data measured along the system trajectories.  Thus, perception is used to learn skills autonomously during run-time operation. 
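A heavily simplified sketch of an online actor-critic loop of this general kind is given below; the linear feature map, learning rates, softmax policy, and plant interface are illustrative assumptions and do not reproduce the algorithms presented in the talk.

    # Online actor-critic sketch with linear function approximation (illustrative only).
    import numpy as np

    def actor_critic(plant, features, n_features, n_actions,
                     steps=10_000, gamma=0.99, alpha_critic=0.01, alpha_actor=0.001):
        w = np.zeros(n_features)                    # critic weights: V(x) ~ w . phi(x)
        theta = np.zeros((n_actions, n_features))   # actor weights: action preferences

        x = plant.reset()
        for _ in range(steps):
            phi = features(x)
            # Softmax policy over action preferences.
            prefs = theta @ phi
            probs = np.exp(prefs - prefs.max())
            probs /= probs.sum()
            a = np.random.choice(n_actions, p=probs)

            x_next, reward = plant.step(a)          # data measured along the system trajectory
            phi_next = features(x_next)

            # Critic: the temporal-difference error drives the value-function update.
            delta = reward + gamma * w @ phi_next - w @ phi
            w += alpha_critic * delta * phi

            # Actor: adjust the action preferences in the direction suggested by the critic.
            grad = -np.outer(probs, phi)
            grad[a] += phi
            theta += alpha_actor * delta * grad

            x = x_next
        return theta, w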

In the second part of the talk, a new formulation for control of multi-agent cooperative robots is given.  A novel form of game among agents in a communication graph is formulated.  In this graphical game, each agent is allowed to interact only with its neighbors, and yet optimal global performance of the team is desired.  Local interactions are used to learn optimal motions that result in synchronized behavior of the team.  Some relations are shown with cooperative swarm motion control and with human panic behavior in building egress situations.
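To convey how purely local, neighbour-only interactions can nevertheless produce synchronized team behaviour, here is a minimal first-order consensus sketch in Python. This is a standard textbook protocol shown only for intuition, not the graphical-game formulation of the talk, and the graph and step size are arbitrary assumptions.

    # First-order consensus on a communication graph: each agent updates its state
    # using only its neighbours' states, yet the whole team synchronizes.
    import numpy as np

    def consensus(adjacency, x0, step=0.1, iterations=200):
        A = np.asarray(adjacency, dtype=float)
        x = np.asarray(x0, dtype=float).copy()
        for _ in range(iterations):
            # Each agent i moves toward the states of its neighbours.
            update = np.array([
                sum(A[i, j] * (x[j] - x[i]) for j in range(len(x)))
                for i in range(len(x))
            ])
            x += step * update
        return x

    # Example: a ring of four agents with different initial states converges to a
    # common value using only neighbour information.
    ring = [[0, 1, 0, 1],
            [1, 0, 1, 0],
            [0, 1, 0, 1],
            [1, 0, 1, 0]]
    print(consensus(ring, [0.0, 1.0, 2.0, 3.0]))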






