Associate professor of computer science at the University of Rochester
Title: Upskilling the Future Workforce Using AI and Affective Computing
Abstract: Many people fear automation, seeing it as a potential job killer, and worry about how much of their work can be automated. Could we instead train a computer to amplify human ability? Should we?
Our ability to be creative, use social learning, and imitate sets us apart from other species. In this talk, I will provide examples of how we can use these theories of social behavior to guide algorithms to 1) identify interpretable behavioral patterns; 2) confirm, elaborate, and design interventions to better understand how the real world works; and 3) deploy the solutions to measure impact. By carefully designing appropriate applications of Human-Centered AI, we show that technology can improve important social and cognitive skills for many, including disadvantaged, ill, and disabled individuals and others who struggle with socio-emotional communication, such as those with autism, severe anxiety, neurodegenerative disease, and terminal illness.
I will also offer insights gained from our exploration of several questions: How are humans able to improve important social and cognitive skills with an intelligent system? What aspect of the feedback helps the most? How can such systems be deployed to promote equity and access to healthcare?
Bio: Ehsan Hoque is an associate professor of computer science at the University of Rochester, where he leads the Rochester Human-Computer Interaction (ROC HCI) Group. From 2018 to 2019, he was also the Interim Director of the Goergen Institute for Data Science. Ehsan earned his Ph.D. from MIT in 2013, where the MIT Museum highlighted his dissertation and patent, the development of an intelligent agent to improve human ability, as one of MIT's most unconventional inventions. Building on the patent, Microsoft released "Presenter Coach," a feature integrated into PowerPoint, in 2019.
His group's work has been recognized with an NSF CRII award, an NSF CAREER award, and the MIT TR35, as well as a commendation in Science News as one of ten early- to mid-career scientists to watch in 2017. In 2020, Ehsan was recognized as one of the emerging leaders in health and sciences by the National Academy of Medicine (NAM). He has served as an associate editor of IEEE Transactions on Affective Computing (2015-2019) and currently serves as an associate editor of PACM IMWUT (2016-present) and Digital Biomarkers (2018-present). Ehsan is an inaugural member of the ACM's Future of Computing Academy.
Associate Professor at the CMU Language Technologies Institute
Title: Multimodal AI: Understanding Human Behaviors
Abstract: Human face-to-face communication is a little like a dance: participants continuously adjust their behaviors based on verbal and nonverbal cues from the social context. Today's computers and interactive devices still lack many of the human-like abilities needed to hold fluid, natural interactions. Leveraging recent advances in machine learning, audio-visual signal processing, and computational linguistics, my research focuses on creating computational technologies able to analyze, recognize, and predict subtle human communicative behaviors in social context. Central to this research effort is the introduction of new probabilistic models that can learn the temporal and fine-grained latent dependencies across behaviors, modalities, and interlocutors. In this talk, I will present some of our recent achievements in multimodal machine learning, addressing five core challenges: representation, alignment, fusion, translation, and co-learning.
Bio: Louis-Philippe Morency is an Associate Professor in the Language Technologies Institute at Carnegie Mellon University, where he leads the Multimodal Communication and Machine Learning Laboratory (MultiComp Lab). He was formerly research faculty in the Computer Science Department at the University of Southern California and received his Ph.D. from the MIT Computer Science and Artificial Intelligence Laboratory. His research focuses on building the computational foundations that enable computers to analyze, recognize, and predict subtle human communicative behaviors during social interactions. He has received numerous awards, including AI's 10 to Watch from IEEE Intelligent Systems, the NetExplo Award in partnership with UNESCO, and 10 best-paper awards at IEEE and ACM conferences. His research has been covered by media outlets such as The Wall Street Journal, The Economist, and NPR.