Invited Speakers

Conference invited speakers will address the entire FLAIRS audience in plenary sessions.

Ayanna Howard
Professor, Director of the Human-Automation Systems (HumAnS) Lab
Georgia Institute of Technology

Ayanna Howard is a Professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology, where she has been on the faculty since 2005. She also holds an adjunct appointment in the School of Interactive Computing within the College of Computing at Georgia Tech. She received her B.S. from Brown University, her M.S.E.E. from the University of Southern California, and her Ph.D. in Electrical Engineering from the University of Southern California in 1999. Her research centers on the concept of humanized intelligence, the process of embedding human cognitive capability into the control path of robotic systems. This work, which addresses issues of autonomous control as well as aspects of interaction with humans and the surrounding environment, has resulted in over 100 peer-reviewed publications across a number of projects, ranging from scientific rover navigation in glacier environments to assistive robots for the home. Her accomplishments have been highlighted through a number of awards and articles, including features in USA Today, Upscale, and TIME Magazine. She received the IEEE Early Career Award in Robotics and Automation in 2005 and the Georgia Tech Faculty Women of Distinction Award in 2008, and was recognized as NSBE Educator of the Year in 2009. From 1993 to 2005, Dr. Howard was at NASA's Jet Propulsion Laboratory, California Institute of Technology. She then joined Georgia Tech, where she founded the Human-Automation Systems Lab. She also serves as Chair of the multidisciplinary Robotics Ph.D. program at Georgia Tech.

Title: Robotics and Assistive Technologies: Their Emerging Role in Healthcare

Abstract: Healthcare robotics refers to robots that are used to increase, maintain, or improve the functional capabilities of people with (and without) disabilities. The domain can be segmented into four primary areas: Rehabilitation Robotics; Robotics for Surgery; Biorobotics; and Assistive Robotics at Home, Work, and Play. Through interaction, robots for healthcare applications can increase the quality of life of older adults and people who experience disabling circumstances, for example by assisting in stroke therapy, assisting surgeons in the operating room, or serving as therapeutic playmates for children with cerebral palsy. Numerous challenges must be addressed, however: determining the roles and responsibilities of both human and robot, developing interfaces that let humans interact with robots without extensive training, and developing methods that allow robots to learn from their human counterparts. Applying such human-interaction methodologies enables a new era of progress in healthcare robotics. In this talk, I will discuss the domain of intelligent robotics for healthcare applications and supporting assistive technologies. I will present approaches in which these technologies can address real-life needs, both for improving quality of life and for tackling rehabilitation and therapy objectives for individuals with (and without) disabilities.

David Johnson
Founder, CEO and President, Decooda

Decooda's vision for delivering valuable social media and enterprise text analytics insights in real time has been framed by David's relentless client focus over his career. At Decooda, David has organized an interdisciplinary team of big data experts, linguists, cognitive psychologists, and artificial intelligence experts who are focused on understanding the root cause of behavior and its impact on business outcomes on a global scale. Decooda is rapidly being recognized for developing innovative, flexible, and highly scalable big data and text analytics cloud solutions that help clients spot the actionable insights and behaviors that matter most in order to deliver better consumer experiences.

Title: Crossing the Data Science Chasm: The Perception of What Data Science Is and What It Needs to Be

Abstract: Businesses acknowledge that the accuracy with which they can understand their customers determines their success (or lack thereof). The challenge is that there are many customers, they have much to say, and they don't necessarily say it clearly. To complicate matters further, customers experience a wide variety of feelings and attitudes toward the products and services they purchase and use. The issue facing businesses is therefore complex, but the goal is strikingly clear: to understand customers, businesses need to listen to them, and not just to what they have to say, but also to why they feel they have to say it. Unfortunately, text analytics and artificial intelligence techniques have not kept pace with the needs of businesses. These weaknesses are visible daily in the form of irrelevant brand impressions and poor customer experiences. In this talk, I will use the domain of text analytics to discuss how the business world's infatuation with "geek," and its emphasis on math, is moving businesses further from understanding the consumer, not closer.
I will present examples of the harsh realities some of the largest businesses face as it relates to the implementation and maintenance of text analytics systems, including: experimental design, domain knowledge, resource constraints, collaborative alignment and technology. Further, I will discuss what businesses are demanding and what is at stake for them and their customers. I will conclude by presenting my point of view on what is needed to successfully begin to tackle these challenges at scale.

Cristina Conati
Associate Professor of Computer Science, University of British Columbia, Canada

Cristina Conati is an Associate Professor of Computer Science at the University of British Columbia, Vancouver, Canada. She received a "Laurea" degree (M.Sc. equivalent) in Computer Science from the University of Milan, Italy (1988), as well as an M.Sc. (1996) and a Ph.D. (1999) in Intelligent Systems from the University of Pittsburgh. Dr. Conati's research goal is to integrate research in Artificial Intelligence (AI), Cognitive Science, and Human Computer Interaction (HCI) to make complex interactive systems increasingly effective and adaptive to users' needs. Her areas of interest include Intelligent User Interfaces, User Modeling, User-Adaptive Systems, and Affective Computing. Her research has received awards from the International Conference on User Modeling, the International Conference on AI in Education, the International Conference on Intelligent User Interfaces (2007), and the Journal of User Modeling and User-Adapted Interaction (2002). Dr. Conati is an Associate Editor for the Journal of AI in Education, the IEEE Transactions on Affective Computing, and the ACM Transactions on Interactive Intelligent Systems.

Title: Who are my users and how can I help them? The quest for user-adaptive interaction

Abstract: The Web and the rapidly increasing availability of sophisticated media have dramatically enhanced the potential of computers as interactive tools that support a large variety of users in a growing range of tasks. However, designing complex interactive systems that satisfy the needs of individual users from highly heterogeneous user groups is very difficult. This talk focuses on research that aims to overcome this difficulty by investigating how to devise interactive systems that can autonomously, dynamically, and unobtrusively adapt to the specific needs of each individual user: research in User-Adaptive Interaction (UAI), a highly interdisciplinary field at the intersection of Artificial Intelligence (AI), Human Computer Interaction (HCI), and Cognitive Science. I will present examples of the UAI research we are conducting in our laboratory, including how to devise user-adaptive visualizations, how to provide personalized support for learning from novel educational systems (e.g., interactive simulations and educational games), and how to model relevant user properties in real time by leveraging advanced input sources such as eye tracking.