Ashique KhudaBukhsh is an assistant professor at the Golisano College of Computing and Information Sciences, Rochester Institute of Technology (RIT). His current research lies at the intersection of NLP and AI for Social Impact as applied to: (i) globally important events arising in linguistically diverse regions, requiring methods to tackle practical challenges involving multilingual, noisy social media text; (ii) polarization in the context of the current US political crisis; and (iii) auditing AI systems and platforms for unintended harms.
Title: From Bollywood Son Preference to Moral Policing of Women in Iran – A 360° View of Gender Bias
Abstract: Do longitudinal studies reveal a skewed gender distribution among newborn babies depicted in Bollywood movies? Who dominates the speaking time in political conversations on 24x7 news networks in the United States—men or women? How does Twitter discourse on gender equality evolve when a woman dies in police custody in Iran after being arrested (reportedly) for improperly wearing a headscarf? What is the representation of women in divorce court proceedings in India? This broad talk, where cutting-edge AI intersects with social science research questions, encompasses a diverse array of studies that unveil gender bias in various forms. In this presentation, I will describe the substantive findings, social impact, methodological challenges, scope for multimodal investigations, and the novel contributions of this research. I will conclude the talk with our findings on worrisome gender bias in several large language models.
Soujanya Poria is an assistant professor of Computer Science at the Singapore University of Technology and Design (SUTD), Singapore. He holds a Ph.D. in Computer Science from the University of Stirling, UK. His main areas of research interest are Large Language Models (LLMs), Multimodal AI, and Natural Language Processing. He leads the DeCLaRe Lab, which works on challenging AI problems centered on LLMs, Multimodal AI, Commonsense Reasoning, and more.
Title: Toward Understanding, Leveraging, and Improving Large Language Models
Abstract: The emergence of Large Language Models (LLMs) has marked a substantial advancement in Natural Language Processing (NLP), contributing significantly to enhanced zero-shot task performance. With these advancements, three primary research areas have emerged: 1) the mechanisms through which LLMs accomplish their tasks, and their limitations; 2) effectively harnessing the power of LLMs across diverse domains; and 3) strategies for enhancing the performance of LLMs and making them more efficient. This presentation delves into our research group's efforts to address these pivotal questions. First, I will outline our approach of using ontology-guided prompt perturbations to unravel the primary limitations of LLMs in solving mathematical and coding problems. Moving on to the second question, we will explore the use of synthetic data generated by LLMs to strengthen their performance on challenging downstream tasks, with a particular focus on structured prediction. Finally, I will elaborate on our initiatives to improve LLMs by incorporating highly effective retrieval strategies, specifically addressing the prevalent challenge of hallucination that afflicts contemporary LLMs.
Tanmoy Chakraborty is an associate professor in the Department of Electrical Engineering at IIT Delhi, India. Additionally, he serves as an associate faculty member at the Yardi School of Artificial Intelligence, IIT Delhi. His broad research interests include Natural Language Processing, Graph Neural Networks, and Social Computing. His current research primarily focuses on enriching frugal language models (reasoning, knowledge grounding, prompting, editing, etc.) and applying them to various applications, including mental health and cyber-informatics.
Title: Roles of Social Networks and Multimodality in Combating Online Malicious Activities
Abstract: Online social media platforms are popular media for disseminating and consuming information. Unfortunately, due to the decentralized generation and propagation of content, they also come with limited liability for harmful posts and collusive persuasion. Online users are often subjected to a barrage of harmful posts (fake news, hate speech, collusive activities, etc.) within a short span of time. In this talk, I shall present one of our megaprojects, "Project Robinhood", which we envision as an all-in-one solution to curb the spread of malicious content on social media. In particular, I will present a series of our recent studies on combating fake news, hate speech, and collusive activities. I will discuss how both reactive and proactive measures can help stop the spread of malicious content.