AB Lab Projects

HCAI @ Howard

The Institute for Socially and Culturally Relevant Human-Centered Artificial Intelligence at Howard University (HCAI@Howard) has a vision to ensure that all of humanity benefits from technology and that these benefits are broadly shared.

The Visibility Project

This project explores how to reduce racist, biased, and microaggressive language among psychiatric professionals. The goal of the project is to foster empathy through an immersive experience.

Classifying Microaggressions in Text Using AI

This project explores empathy and the microaggressions that humans experience in person-to-person speech. Microaggressions are brief, subtle verbal, behavioral, or environmental indignities that communicate negative, prejudicial insults toward a group, particularly culturally marginalized groups. AI models are rarely trained on multimodal examples that provide more context about abusive speech. We believe this research will help create smarter diversity and inclusion training tools.

This project is funded by the NSF, Amazon, and Salesforce.
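As a minimal illustration of the kind of text classification this project involves, the sketch below trains a small multinomial Naive Bayes classifier from scratch on a handful of hypothetical labeled examples (the example sentences, labels, and function names are illustrative assumptions, not the project's actual data or models; real work in this area would use much larger annotated corpora and modern language models):

```python
import math
from collections import Counter

# Hypothetical toy training data: label 1 = microaggression, 0 = neutral.
# These examples are illustrative, not drawn from the project's dataset.
TRAIN = [
    ("you are so articulate for someone like you", 1),
    ("where are you really from", 1),
    ("great presentation today thanks for sharing", 0),
    ("see you at the meeting tomorrow", 0),
]

def train_nb(examples, alpha=1.0):
    """Fit a multinomial Naive Bayes text classifier with Laplace smoothing."""
    class_counts = Counter(label for _, label in examples)
    word_counts = {c: Counter() for c in class_counts}
    for text, label in examples:
        word_counts[label].update(text.split())
    vocab = {w for c in word_counts for w in word_counts[c]}
    total = sum(class_counts.values())
    priors = {c: math.log(n / total) for c, n in class_counts.items()}
    return priors, word_counts, vocab, alpha

def predict(model, text):
    """Return the most probable class label for a piece of text."""
    priors, word_counts, vocab, alpha = model
    scores = {}
    for c, prior in priors.items():
        denom = sum(word_counts[c].values()) + alpha * len(vocab)
        scores[c] = prior + sum(
            math.log((word_counts[c][w] + alpha) / denom)
            for w in text.split() if w in vocab
        )
    return max(scores, key=scores.get)

model = train_nb(TRAIN)
print(predict(model, "where are you really from"))  # → 1
```

A word-count model like this has no access to tone, facial expression, or situational context, which is exactly the gap that motivates multimodal examples for abusive-speech detection.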

Empathy & Design Thinking

This project explores the unique ways empathy is exhibited by individuals in design thinking sessions through the study of human behavior. This project is funded by the NSF.

Ear Biometric Recognition & Authentication (EBRA)

This project analyzes and seeks to improve the state of the art in biometric identification and recognition using ear images. The research involves an end-to-end study of ear segmentation, ear similarity, and existing datasets. Currently, the work focuses on creating a novel, inclusive dataset and surveying the state of the art in ear similarity.
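A common pattern in biometric verification, which the ear-similarity work described above could build on, is to compare feature vectors extracted from images and accept a match when their similarity clears a threshold. The sketch below shows that pattern with cosine similarity; the embedding values, threshold, and function names are hypothetical stand-ins, not the project's actual pipeline:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def verify(probe, enrolled, threshold=0.8):
    """Accept if the probe embedding is close enough to the enrolled one."""
    return cosine_similarity(probe, enrolled) >= threshold

# Hypothetical embeddings standing in for features extracted from ear images.
enrolled = [0.9, 0.1, 0.4]
same_person = [0.85, 0.15, 0.38]   # similar direction → accepted
different_person = [0.1, 0.9, 0.2]  # different direction → rejected

print(verify(same_person, enrolled))      # → True
print(verify(different_person, enrolled)) # → False
```

In practice the embeddings would come from a trained model, and the verification threshold would be tuned on a dataset; building an inclusive dataset, as the project does, directly affects how well such thresholds generalize across populations.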

Engaging Computer Science (CS) Students in Games for Social Change

This project, funded by the NSF, draws on elements of games for social change and interactive computing, such as gaming, participatory design, and design thinking, to teach undergraduate CS students computational thinking.

Codeswitching

This project, funded by the NSF, NSA, NGA, and Northrop Grumman, examines how codeswitching is used to hide the sentiment of social media data.
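To illustrate why codeswitching can hide sentiment from automated analysis, the sketch below uses a tiny English-only sentiment lexicon: tokens from another language fall outside the lexicon and contribute nothing to the score, so the sentiment goes undetected. The lexicon and example sentences are illustrative assumptions, not the project's data or methods:

```python
# Hypothetical English-only sentiment lexicon; real systems use far larger
# resources (e.g., curated lexicons or trained models).
LEXICON = {"terrible": -1, "awful": -1, "great": 1, "love": 1}

def sentiment_score(text):
    """Sum lexicon polarities; out-of-lexicon tokens contribute nothing."""
    return sum(LEXICON.get(tok, 0) for tok in text.lower().split())

english = "the service was terrible"
codeswitched = "the service was malisimo"  # Spanish token carries the negativity

print(sentiment_score(english))       # → -1 (negativity detected)
print(sentiment_score(codeswitched))  # → 0 (negativity evades the lexicon)
```

The same text conveys the same sentiment to a bilingual reader, but switching one key word out of the tool's language zeroes out the signal, which is the phenomenon this project studies.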