AAAI'14 Workshop on Cognitive Computing for Augmented Human Intelligence

Québec City, Québec, Canada; July 27, 2014

Description

The workshop seeks to augment human decision making by exploiting synergies across two areas of AI research in which exciting progress has been made in recent years, but which so far have not had an explicit common venue. The first area concerns powerful new learning techniques with the potential to automatically learn complex tasks by training directly on massive amounts of raw data, much of which may be unlabeled, unstructured, and multi-modal in form (natural language text/speech, audio, video, etc.). These techniques include deep learning, manifold learning, sparsity-based techniques, and transfer/cross-modal learning and inference methods. Researchers employing such techniques have recently achieved dramatic performance leaps in speech and image recognition tasks, and have also demonstrated the ability to learn complex feature representations entirely from unlabeled data. The second area concerns enabling computers to understand and work with naturalistic input from humans, in the form of natural language speech or text, visual input such as gestures or facial expressions, and haptic (touch-based) input. The most exciting demonstrations of these capabilities in the last few years include question-answering systems such as Watson and Wolfram Alpha, and commercially deployed personal assistant technologies such as Siri, Google Now, Dragon Mobile Assistant, Nina, and TellMe. Synergistic advances across these two areas could vastly improve human decision making in many scenarios, including information overload (e.g., driving), cognitive impairment (e.g., Alzheimer’s disease), and collective (multi-objective) decision making (e.g., conference program scheduling, disaster response).

“Cognitive Computing” is an emerging research topic inspired by a vision of how the unification described above could lead to a new generation of computing systems enabling genuine human-machine collaboration. According to this vision, we may soon be able to build computing systems capable of understanding high-level objectives specified by humans in a natural language, autonomously learning how to achieve the objectives from data in the domain, reporting results back to humans, and iterating the interactions via sequential dialog until the objectives are achieved. As building and deploying such systems may require major platform improvements with respect to size, power usage, etc., there is also a significant focus in Cognitive Computing on alternative hardware, such as brain-inspired or other non-von Neumann architectures.

Unlike the expert systems of the past, which relied on rigid I/O and hard-coded expert rules, Cognitive Computing systems will process natural language and unstructured data and learn from experience, much as humans do. They will draw on deep domain expertise to provide decision support and help humans make better decisions based on the available data, whether in healthcare, finance, or customer service.

In traditional AI, humans are largely outside the loop; in cognitive computing, humans and machines work together. To enable natural interaction, cognitive computing systems use image and speech/audio recognition as eyes and ears to perceive the world and interact more seamlessly with humans. By using visual analytics and data visualization techniques, cognitive computers can present insights from data in a visually compelling way. This sets up a feedback loop in which machines and humans may learn from each other.
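As a purely illustrative aside, the following minimal Python sketch shows one way such a human-machine feedback loop could be organized: the machine proposes an insight, a human reacts, and the machine adapts its future proposals. The names used here (CognitiveAgent, human_feedback, the candidate options) are hypothetical and do not correspond to any particular system presented at the workshop.

# Minimal, purely illustrative sketch of the human-machine feedback loop
# described above. All names (CognitiveAgent, human_feedback, candidates)
# are hypothetical and do not refer to any specific system at the workshop.

class CognitiveAgent:
    """Toy agent that proposes an option, observes human feedback, and adapts."""

    def __init__(self, candidates):
        # Learned preference score for each candidate insight/action.
        self.scores = {c: 0.0 for c in candidates}

    def propose(self):
        # Present the currently highest-scoring candidate to the human.
        return max(self.scores, key=self.scores.get)

    def update(self, candidate, feedback):
        # Simple reinforcement-style update from a +1 / -1 human judgment.
        self.scores[candidate] += 0.5 * feedback


def human_feedback(candidate, preferred="B"):
    # Stand-in for the human side of the loop (a click, utterance, or gesture).
    return 1 if candidate == preferred else -1


if __name__ == "__main__":
    agent = CognitiveAgent(["A", "B", "C"])
    for step in range(5):
        choice = agent.propose()          # machine presents an insight/action
        signal = human_feedback(choice)   # human reacts to what is shown
        agent.update(choice, signal)      # machine adapts from the reaction
        print(f"step {step}: proposed {choice}, feedback {signal}")

In an actual cognitive computing system, the propose step would surface visual analytics or natural-language summaries, and the feedback would arrive as speech, gesture, or touch rather than a numeric signal.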

In this context, the aim of the workshop is to draw the attention of the AI community to three primary research challenges of cognitive computing:
1. Innovative hardware systems to support cognitive functions (possibly inspired by neuronal “wetware”) 
2. Cognitive experience interfaces (speech/vision/touch) 
3. Software systems that: 
    a. Emulate automatic learning of cognitive functions in humans (reasoning, perception, communication, goal-seeking, etc.)
    b. Emulate actual neurophysiological mechanisms and algorithms that support human cognition

This, we argue, will lead to better AI systems that can work alongside humans, with each doing the tasks it is best suited for.
Topics of interest include, but are not restricted to:
1. What does cognitive computing mean to AI researchers? 
2. What does cognitive computing mean to Neuroscience researchers? 
3. What does cognitive computing mean to Hardware researchers? 
4. How does Cognitive Computing differ from AI, and what new sets of challenges does it raise?
5. What are the test beds of cognitive computing? 
6. What are the early applications of cognitive computing systems? 
7. What are the early architectures that allow for the closed cognitive loop, from sensors to actions? 
8. What are the emerging machine learning technologies that address the big data challenges implied by cognitive computing applications? 
9. What are the early augmented cognition technologies? 
10. How can cognitive computing techniques improve human computation, and what demands do the latter put on the former? 
11. What are the ethical and legal aspects of machine-suggested actions?


Workshop Plan

Workshop format: The workshop will consist of demo and poster presentations, a panel, invited talks, and discussion sessions over a full-day schedule. The panel will focus on connecting AI researchers with the various challenges that the targeted domains bring.

Submission format: All paper submissions must be in AAAI format and can be of two types. The first is regular research papers of up to 6 pages plus 1 page of references, which are expected to present a significant contribution. The second is short submissions of up to 4 pages plus 1 page of references, which describe a position on the topic of the workshop or a demonstration/tool.

Submission site: Papers are to be submitted online at https://www.easychair.org/conferences/?conf=cgahi2014. We request interested authors to log in and submit abstracts as an expression of interest before the actual deadline (by April 10).


Important Dates 

April 17, 2014: Paper Submission Deadline

May 10, 2014: Notification of Decision

May 15, 2014: Camera Ready Due

July 27, 2014: Workshop Date


The Organizers

Biplav Srivastava, IBM Research – India, New Delhi, India; Email: sbiplav AT in.ibm.com (Designated Contact)

Aurelie Lozano, IBM TJ Watson Research Center, USA; Email: aclozano AT us.ibm.com

Janusz Marecki, IBM TJ Watson Research Center, USA; Email: marecki AT us.ibm.com

Irina Rish, IBM TJ Watson Research Center, USA; Email: rish AT us.ibm.com

Ruslan Salakhutdinov, University of Toronto, Canada; Email: rsalakhu AT cs.toronto.edu

Gerald Tesauro, IBM TJ Watson Research Center, USA; Email: gtesauro AT us.ibm.com

Manuela Veloso, Carnegie Mellon University, USA; Email: mmv AT cs.cmu.edu



Accepted Papers 


1. Hui-Ju Katherine Chiang, Shih-Han Wang and Jane Yung-Jen Hsu "Efficiently Retrieving Images that We Perceived as Similar" (Short paper)

2. Jonathan Dunn, Jon Beltran de Heredia, Maura Burke, Lisa Gandy, Sergey Kanareykin, Oren Kapah, Matthew Taylor, Dell Hines, Ophir Frieder, David Grossman, Newton Howard, Moshe Koppel, Scott Morris, Andrew Ortony and Shlomo Argamon "Language-Independent Ensemble Approaches to Metaphor Identification" (Short paper)

3. Jakob Grundström and Pierre Nugues "Using Syntactic Features in Answer Reranking" (Short paper)

4. Rebecka Weegar, Linus Hammarlund, Agnes Tegen, Magnus Oskarsson, Kalle Åström and Pierre Nugues "Visual Entity Linking: A Preliminary Study" (Short paper)

5. Vishnu Nath "Solving 3D mazes with Machine Learning and Humanoid Robots" (Full paper)

6. Daniel Schlegel and Stuart Shapiro "Inference Graphs: A New Kind of Hybrid Reasoning System" (Full paper)

7. Julia Taylor and Victor Raskin "On the Nature of Composable Properties" (Short paper)

8. Yu-Ting Li and Juan Wachs "A Bayesian Approach to Determine Focus of Attention in Spatial and Time-Sensitive Decision Making Scenarios" (Full paper)

9. Yuetan Lin, Shu Kong, Donghui Wang and Yueting Zhuang "Saliency Detection within a Deep Convolutional Architecture" (Full paper)

10. Steve Heisig, Guillermo Cecchi, Ravi Rao and Irina Rish "Augmented Human: Human OS for Improved Mental Function" (Short paper)


Invited Talks by: 


Prof. Yoshua Bengio (University of Montreal) "Challenges of Deep Learning towards AI"

Prof. Milind Tambe (University of Southern California) "Human Adversaries in Security Games: Integrating Models of Bounded Rationality and Fast Algorithms"

Prof. Bonnie E. John (Carnegie Mellon University) "The User Experience of Cognitive Systems"

Prof. Pat Langley (University of Auckland) "The Cognitive Systems Paradigm"

Dr. Richard Socher (Stanford University) "Recursive Deep Learning for Modeling Compositional and Grounded Meaning"


Detailed Workshop Schedule:

Session 1 (120 mins)

- [08.30-08.40] Welcome and General Introduction

- [08.40-09.20] Invited talk: Prof. Yoshua Bengio "Challenges of Deep Learning towards AI"

- [09.20-10.00] Invited talk: Prof. Bonnie E. John "The User Experience of Cognitive Systems"

- [10.00-10.20] Vishnu Nath "Solving 3D mazes with Machine Learning and Humanoid Robots"

- [10.20-10.25] Hui-Ju Katherine Chiang, Shih-Han Wang and Jane Yung-Jen Hsu "Efficiently Retrieving Images that We Perceived as Similar"

- [10.25-10.30] Jonathan Dunn, Jon Beltran de Heredia, Maura Burke, Lisa Gandy, Sergey Kanareykin, Oren Kapah, Matthew Taylor, Dell Hines, Ophir Frieder, David Grossman, Newton Howard, Moshe Koppel, Scott Morris, Andrew Ortony and Shlomo Argamon "Language-Independent Ensemble Approaches to Metaphor Identification"

[10.30-11.00] Coffee Break + poster session

Session 2 (90 mins)

- [11.00-11.40] Invited talk: Prof. Milind Tambe "Human Adversaries in Security Games: Integrating Models of Bounded Rationality and Fast Algorithms"

- [11.40-12.00] Jakob Grundström and Pierre Nugues "Using Syntactic Features in Answer Reranking"

- [12.00-12.05] Daniel Schlegel and Stuart Shapiro "Inference Graphs: A New Kind of Hybrid Reasoning System"

- [12.05-12.10] Rebecka Weegar, Linus Hammarlund, Agnes Tegen, Magnus Oskarsson, Kalle Åström and Pierre Nugues "Visual Entity Linking: A Preliminary Study"

- [12.10-12.30] Open discussion

[12.30 - 14.00] Lunch

Session 3 (90 mins)

- [14.00-14.40] Invited talk: Prof. Pat Langley "The Cognitive Systems Paradigm"

- [14.40-15.00] Yu-Ting Li and Juan Wachs "A Bayesian Approach to Determine Focus of Attention in Spatial and Time-Sensitive Decision Making Scenarios"

- [15.00-15.05] Julia Taylor and Victor Raskin "On the Nature of Composable Properties"

- [15.05-15.10] Steve Heisig, Guillermo Cecchi, Ravi Rao and Irina Rish "Augmented Human: Human OS for Improved Mental Function"

- [15.10-15.30] Demos

[15.30 - 16.00] Coffee Break + poster session

Session 4 (90 mins)

- [16.00-16.40] Invited talk: Dr. Richard Socher "Recursive Deep Learning for Modeling Compositional and Grounded Meaning"

- [16.40-17.00] Yuetan Lin, Shu Kong, Donghui Wang and Yueting Zhuang "Saliency Detection within a Deep Convolutional Architecture"

- [17.00-17.30] Open discussion