2nd International Workshop on Conversational Approaches to Information Retrieval (CAIR'18)

at

SIGIR 2018

Follow us: @CAIRWorkshop

Invited Speakers

Eugene Agichtein, Emory University

Title. Conversational Search with Voice-based Assistants: The Case of News Search and Suggestion


Abstract. Voice-based assistants such as Siri, and now Alexa, Cortana, and Google Assistant, have rekindled the dream of a true conversation with a computer search engine. As is becoming increasingly clear, supporting search in a voice-based conversational setting requires us to revise user search models for intent inference, response ranking, and evaluation. The conversational setting also provides exciting opportunities for mixed-initiative search and recommendation. One particularly interesting case is news search and suggestion, where a voice-based conversation can help a person stay informed about what is happening in the world while becoming, and staying, engaged. Our team is exploring these and related issues in the context of the Amazon Alexa Conversational AI Challenge, now in its second iteration. I'll describe some of our successes and failures, and interesting research directions we plan to explore after the Challenge.


Bio. Dr. Eugene Agichtein is an Associate Professor of Computer Science at Emory University, where he founded and leads the Intelligent Information Access Laboratory (IR Lab). Eugene's research spans the areas of information retrieval, natural language processing, data mining, and human-computer interaction. A large part of this work was done in collaboration with researchers and engineers at Microsoft, Google, and Yahoo (now Oath). Eugene has co-authored over 100 publications, which have been recognized by multiple awards, including an A. P. Sloan Fellowship and the 2013 Karen Spärck Jones Award from the British Computer Society. Eugene was Program Co-Chair of the WSDM 2012 and WWW 2017 conferences. More information is at http://www.mathcs.emory.edu/~eugene/.

Brian Strope, Google
Title. Steps Toward Semantic Search


Abstract. We'll review some of the recent steps that are starting to make semantic search possible at scale. From a hardware and infrastructure perspective, these are driven by exponential growth in compute that now extends beyond Moore's law. From a software perspective, the steps include strategies for learning from large amounts of unstructured data, automatically deriving implicit semantic representations, learning from multitask and multilingual data, and applying models to new tasks and domains not directly matched to their training. We'll also describe recent products making use of this type of technology.
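The core retrieval mechanism behind the semantic representations mentioned in the abstract can be illustrated with a minimal sketch: documents and queries are mapped into a shared vector space, and relevance is scored by vector similarity rather than term overlap. The embeddings below are hand-set toy values for illustration only; a real system would learn them from large amounts of unstructured text.

```python
import math

# Toy document embeddings (hypothetical, hand-set for illustration).
docs = {
    "news article about elections": [0.9, 0.1, 0.0],
    "recipe for pasta": [0.0, 0.2, 0.9],
    "political debate coverage": [0.8, 0.3, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def semantic_search(query_vec, docs, k=2):
    """Rank documents by cosine similarity to the query embedding."""
    ranked = sorted(docs.items(), key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [title for title, _ in ranked[:k]]

# A query embedding close to the "news/politics" region of the space;
# note it matches documents that share no literal keywords with it.
query = [0.85, 0.2, 0.05]
print(semantic_search(query, docs))
```

The point of the sketch is that ranking happens entirely in the embedding space, which is what lets a model trained on one task or language transfer to domains not directly matched to its training.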


Bio. Brian Strope is a research scientist and manager for Ray Kurzweil's research efforts at Google. Earlier at Google, Brian helped build Google's speech recognition technology, and he worked for several years as a research engineer at Nuance Communications. His dissertation at UCLA was in auditory modeling, and he studied signal processing and music at Brown.