Invited Speakers

Dr. Bonnie J. Dorr

Florida Institute for Human & Machine Cognition (IHMC)

Dr. Bonnie Dorr is a Senior Research Scientist and Associate Director of the Institute for Human and Machine Cognition (IHMC) in Ocala, Florida, as well as Professor Emerita in the Department of Computer Science at the University of Maryland. She is a former DARPA Program Manager for Human Language Technology and also served as Maryland's Associate Dean for the College of Computer, Mathematical, and Natural Sciences (CMNS). She co-founded the Computational Linguistics and Information Processing (CLIP) Laboratory in the Institute for Advanced Computer Studies, where she served as its director for 15 years. She was also Principal Scientist for two years at the Johns Hopkins University Human Language Technology Center of Excellence (HLTCOE). For more than 30 years, she has conducted research in several areas of broad-scale multilingual processing, e.g., machine translation, summarization, and deep language understanding. She is a Sloan Fellow, an NSF Presidential Faculty (PECASE) Fellow, former President of the Association for Computational Linguistics (2008), a Fellow of the Association for the Advancement of Artificial Intelligence (2013), and a recently inducted member of Leadership Florida Class XXXIII (2014-2015).

Talk Title: Human-Centered Research in Extended Ambient Intelligence Environments

Abstract: Systems that help humans with difficult or tedious tasks requiring intelligence (e.g., translation of language, causal reasoning in medicine, or autonomous driving) are central to the field of artificial intelligence. A common viewpoint associated with such systems is that humans are expected to adapt to machines, rather than the other way around. This viewpoint presumes a “one size fits all” approach that ignores the need for adaptability to user-specific preferences, intentions, beliefs, and abilities. Research at IHMC takes into account user-specific facets in the design of technology for understanding and assisting humans with a range of human modalities, including sensory perception, motion and action, and multi-party interchanges. This talk focuses on communication agents for assistive technology, including understanding and adapting to progressively impaired speech, as well as additional human-centered research at IHMC, e.g., deep language understanding, natural language dialogue, humanoid robotics, and exoskeleton research.

Dr. Tom Mitchell

Carnegie Mellon University

Tom M. Mitchell founded and chairs the Machine Learning Department at Carnegie Mellon University, where he is the E. Fredkin University Professor.  His research uses machine learning to develop computers that are learning to read the web, and uses brain imaging to study how the human brain understands what it reads.  Mitchell is a member of the U.S. National Academy of Engineering, a Fellow of the American Association for the Advancement of Science (AAAS), and a Fellow and Past President of the Association for the Advancement of Artificial Intelligence (AAAI).  He believes the field of machine learning will be the fastest growing branch of computer science during the 21st century.

Talk Title: Never-Ending Language Learning

Abstract: We will never really understand the process of learning from experience until we can build machines that learn many different things, over years, and become better learners over time. We describe our research to build a Never-Ending Language Learner (NELL) that runs 24 hours per day, forever, learning to read the web. Each day NELL extracts (reads) more facts from the web into its growing knowledge base of beliefs. Each day NELL also learns to read better than the day before. NELL has been running 24 hours/day for over five years now. The result so far is an increasingly competent reader, and a collection of over 90 million interconnected beliefs (e.g., servedWith(coffee, applePie)) that NELL is considering at different levels of confidence. NELL is also now learning to reason over its extracted knowledge, and to automatically extend its ontology. Most recently, we are adding to NELL the ability to analyze images, work with non-English languages, and self-reflect on its current competence and shortcomings.
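As background for readers unfamiliar with this style of knowledge base, the sketch below shows one way confidence-weighted beliefs such as servedWith(coffee, applePie) might be stored and queried. It is a minimal, hypothetical Python illustration; the class, method, and relation names are invented here and do not reflect NELL's actual data structures or code.

```python
# Hypothetical sketch of a confidence-weighted belief store; not NELL's data model.
from collections import defaultdict

class BeliefStore:
    def __init__(self):
        self.beliefs = {}                  # (relation, subject, object) -> confidence in [0, 1]
        self.by_subject = defaultdict(set) # subject -> set of belief keys

    def assert_belief(self, relation, subject, obj, confidence):
        """Record a belief, keeping the highest confidence seen so far."""
        key = (relation, subject, obj)
        self.beliefs[key] = max(confidence, self.beliefs.get(key, 0.0))
        self.by_subject[subject].add(key)

    def query(self, subject, min_confidence=0.5):
        """Return a subject's beliefs at or above a confidence threshold."""
        return [(key, self.beliefs[key]) for key in self.by_subject[subject]
                if self.beliefs[key] >= min_confidence]

kb = BeliefStore()
kb.assert_belief("servedWith", "coffee", "applePie", 0.87)
kb.assert_belief("isA", "coffee", "beverage", 0.99)
print(kb.query("coffee"))
```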

Dr. Peter Stone

University of Texas at Austin

Dr. Peter Stone is the David Bruton, Jr. Centennial Professor of Computer Science at the University of Texas at Austin. In 2013 he was awarded the University of Texas System Regents' Outstanding Teaching Award and in 2014 he was inducted into the UT Austin Academy of Distinguished Teachers, earning him the title of University Distinguished Teaching Professor. Professor Stone's research interests in Artificial Intelligence include machine learning (especially reinforcement learning), multiagent systems, robotics, and e-commerce. Professor Stone received his Ph.D. in Computer Science in 1998 from Carnegie Mellon University. From 1999 to 2002 he was a Senior Technical Staff Member in the Artificial Intelligence Principles Research Department at AT&T Labs - Research. He is an Alfred P. Sloan Research Fellow, Guggenheim Fellow, AAAI Fellow, Fulbright Scholar, and 2004 ONR Young Investigator. In 2003, he won an NSF CAREER award for his proposed long term research on learning agents in dynamic, collaborative, and adversarial multiagent environments, and in 2007 he received the prestigious IJCAI Computers and Thought Award, given biennially to the top AI researcher under the age of 35.

Talk Title: Practical RL: Representation, Interaction, Synthesis, and Mortality (PRISM)

Abstract: For more than two decades, temporal difference Reinforcement Learning (RL) has received a lot of attention as a theoretically grounded approach to learning behavior policies in sequential decision making tasks from online experience. Based on the theory of Markov Decision Processes and dynamic programming, important properties have been proven regarding its convergence to globally optimal policies under a variety of assumptions. However, when scaling up RL to large continuous domains with imperfect representations and hierarchical structure, we often try applying algorithms that are proven to converge in small finite domains, and then just hope for the best. Drawing on several different research threads within the Learning Agents Research Group at UT Austin, I will discuss four types of issues that arise from these constraints and opportunities: 1) Representation - choosing the algorithm for the problem's representation and adapting the representation to fit the algorithm; 2) Interaction - with other agents and with human trainers; 3) Synthesis - of different algorithms for the same problem and of different concepts in the same algorithm; and 4) Mortality - the opportunity to improve learning based on past experience and the constraint that one can't explore exhaustively. Within this context, I will focus on two specific RL approaches, namely the TEXPLORE algorithm for real-time sample-efficient reinforcement learning for robots; and layered learning, a hierarchical machine learning paradigm that enables learning of complex behaviors by incrementally learning a series of sub-behaviors. TEXPLORE has been implemented and tested on a full-size fully autonomous robot car, and layered learning was the key deciding factor in our RoboCup 2014 3D simulation league championship.
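For readers who want a concrete picture of the temporal difference updates the abstract refers to, here is a minimal textbook tabular Q-learning sketch in Python on a toy chain environment. It is generic background only, not TEXPLORE or layered learning, and the environment, function names, and parameter values are invented for illustration.

```python
# Generic tabular Q-learning sketch; standard textbook TD learning, not TEXPLORE.
import random
from collections import defaultdict

def q_learning(env_step, n_actions, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning with an epsilon-greedy behavior policy."""
    Q = defaultdict(float)  # (state, action) -> estimated return
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action selection.
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda act: Q[(s, act)])
            s_next, r, done = env_step(s, a)
            # Temporal difference update toward r + gamma * max_a' Q(s', a').
            best_next = max(Q[(s_next, act)] for act in range(n_actions))
            target = r if done else r + gamma * best_next
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s_next
    return Q

# Toy 5-state chain: action 1 moves right, action 0 moves left; reward at state 4.
def chain_step(s, a):
    s_next = min(s + 1, 4) if a == 1 else max(s - 1, 0)
    return s_next, (1.0 if s_next == 4 else 0.0), s_next == 4

Q = q_learning(chain_step, n_actions=2)
print(max(range(2), key=lambda act: Q[(0, act)]))  # learned best action in state 0
```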


Special Track Invited Speakers

Dr. Andrew Olney

University of Memphis

Dr. Andrew Olney presently serves as Associate Professor in both the Institute for Intelligent Systems and the Department of Psychology, and as Director of the Institute for Intelligent Systems, at the University of Memphis. Dr. Olney received a B.A. in Linguistics with Cognitive Science from University College London in 1998, an M.S. in Evolutionary and Adaptive Systems from the University of Sussex in 2001, and a Ph.D. in Computer Science from the University of Memphis in 2006. His primary research interests are in natural language interfaces. Specific interests include vector space models, dialogue systems, unsupervised grammar induction, robotics, and intelligent tutoring systems.

Talk Title:  Building a BrainTrust

Abstract:  Knowledge engineering is one of the most challenging aspects of creating AI systems. Typically a small number of highly motivated experts work many hours to engineer even a small system. But what if you could replace those experts with novices? In this talk I will describe our ongoing efforts to do just that. BrainTrust is a system in which students create knowledge representations while they read their textbooks and complete reading comprehension tasks. Unlike most human computation systems, BrainTrust attempts to manage the tasks given to users so that they will not only produce viable knowledge representations but also learn in the process.

Dr. Bamshad Mobasher

DePaul University

Dr. Bamshad Mobasher is a Professor of Computer Science and the director of the Center for Web Intelligence at the School of Computing of DePaul University in Chicago. His research areas include Web mining, Web personalization, recommender systems, predictive user modeling, and information retrieval. He has published five edited books as well as more than 170 scientific articles, including several seminal papers in Web mining and Web personalization that are among the most cited in these areas. He has served as an organizer and on the program committees of numerous conferences, including as a program chair and steering committee member of the ACM International Conference on Recommender Systems, program chair for the International Conference on User Modeling, Adaptation and Personalization, and local organizing chair for the ACM Conference on Knowledge Discovery and Data Mining. As the director of the Center for Web Intelligence, Dr. Mobasher is directing research in Web mining, predictive analytics, and personalization, as well as overseeing several related joint projects with industry. Dr. Mobasher serves as an associate editor for the ACM Transactions on the Web, ACM Transactions on Interactive Intelligent Systems, and the ACM Transactions on Internet Technology. He has served on the editorial boards of several other prominent computing journals, including User Modeling and User-Adapted Interaction, and the Journal of Web Semantics.

Talk Title: Context Inference and Adaptation in Recommender Systems

Abstract: Recommender systems have become essential tools to alleviate information overload in many application areas by tailoring their recommendations to users' personal preferences. Users' interests in items, however, may change over time depending on their current situation or context. Without considering context, recommendations may match the general preferences of a user, but they may be of little utility to the user in his/her current situation. Little agreement exists among researchers as to what constitutes context, but its importance seems undisputed. In psychology, a change in context during learning has been shown to have an impact on recall. Research in linguistics has shown that context plays an important role in disambiguation. More recently, a variety of approaches and architectures have emerged for incorporating context or situational awareness in the recommendation process. In this talk, I will provide a brief overview of the problem of contextual recommendation and some of the recently proposed solutions. I will then focus on recent recommendation approaches for modeling “interactional context,” where context is not directly represented using a pre-specified set of explicit variables, but is inferred based on observations of users' behavior in their ongoing interactions with the system. I will highlight the use of latent factor modeling as well as social annotations, such as collaborative tagging, as the basis for inferring context. I will also describe an approach based on the multi-armed bandit strategy and change-point analysis in order to incrementally adapt recommendations to changes in context.
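To make the bandit-style adaptation mentioned above more concrete, the sketch below shows a generic epsilon-greedy multi-armed bandit whose reward estimates are computed over a sliding window of recent feedback, so recommendations can drift as the user's context changes. The sliding window merely stands in for the change-point analysis discussed in the talk; the class, arm names, and parameters are illustrative assumptions, not the speaker's actual method.

```python
# Generic sliding-window bandit sketch for context-sensitive recommendation.
import random
from collections import deque

class SlidingWindowBandit:
    """Epsilon-greedy bandit whose reward estimates use only recent feedback."""

    def __init__(self, arms, window=200, epsilon=0.1):
        self.arms = list(arms)                # candidate items to recommend
        self.epsilon = epsilon
        self.history = deque(maxlen=window)   # recent (arm, reward) pairs only

    def _recent_mean(self, arm):
        rewards = [r for a, r in self.history if a == arm]
        return sum(rewards) / len(rewards) if rewards else 0.0

    def recommend(self):
        # Explore occasionally; otherwise pick the arm with the best recent payoff.
        if random.random() < self.epsilon:
            return random.choice(self.arms)
        return max(self.arms, key=self._recent_mean)

    def feedback(self, arm, reward):
        # Old feedback falls out of the window, so estimates track context shifts.
        self.history.append((arm, reward))

bandit = SlidingWindowBandit(arms=["news", "sports", "music"])
item = bandit.recommend()
bandit.feedback(item, reward=1.0)   # e.g., the user clicked the recommended item
```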

Dr. Cory Butz

University of Regina

Dr. Cory J. Butz received his Ph.D. degree in computer science from the University of Regina, Regina, SK, Canada, in 2000. He then joined the School of Information Technology and Engineering at the University of Ottawa, Ottawa, ON, Canada, as an Assistant Professor. In 2001, he returned to the Department of Computer Science at the University of Regina, where he now holds the rank of Professor and serves as the Associate Dean (Research and Graduate Studies) in the Faculty of Science. Currently, he is Vice President of the Canadian Artificial Intelligence Association / Association pour l'intelligence artificielle au Canada (CAIAC). His research findings on Bayesian networks have drawn invitations to visit Google, MIT, and the University of Cambridge.

Talk Title:  Introducing Darwinian Networks

Abstract:  Darwinian networks (DNs) are introduced to simplify and clarify working with Bayesian networks (BNs). Rather than modelling the variables in a problem domain, DNs represent the probability tables in the model. The graphical manipulation of the tables then takes on a biological feel. It is shown how DNs can unify modeling and reasoning tasks into a single platform.
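As standard background for the tables that Darwinian networks manipulate, the sketch below represents a tiny Bayesian network directly as its conditional probability tables and computes a joint probability by multiplying one entry per table. This illustrates ordinary BN factorization only, not the DN formalism itself; the rain/sprinkler example and all numbers are made up.

```python
# Background sketch: a tiny Bayesian network stored as its probability tables.
# Standard BN factorization for illustration; not the Darwinian network formalism.

# Each CPT maps (variable value, parent values...) to a probability.
cpts = {
    "Rain":      {(True,): 0.2, (False,): 0.8},                         # P(Rain)
    "Sprinkler": {(True, True): 0.01, (False, True): 0.99,              # P(Sprinkler | Rain)
                  (True, False): 0.4, (False, False): 0.6},
    "WetGrass":  {(True, True, True): 0.99, (False, True, True): 0.01,  # P(WetGrass | Sprinkler, Rain)
                  (True, True, False): 0.9, (False, True, False): 0.1,
                  (True, False, True): 0.8, (False, False, True): 0.2,
                  (True, False, False): 0.0, (False, False, False): 1.0},
}
parents = {"Rain": [], "Sprinkler": ["Rain"], "WetGrass": ["Sprinkler", "Rain"]}

def joint_probability(assignment):
    """P(assignment) as the product of one CPT entry per variable."""
    p = 1.0
    for var, table in cpts.items():
        key = (assignment[var],) + tuple(assignment[pa] for pa in parents[var])
        p *= table[key]
    return p

print(joint_probability({"Rain": True, "Sprinkler": False, "WetGrass": True}))
# 0.2 * 0.99 * 0.8 = 0.1584
```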