The research program will identify and study the ways that AI can both exacerbate and ameliorate social crises.
The research framework encompasses Trustworthy AI, Equity and AI, and AI Literacy. Research projects follow a convergent, interdisciplinary approach that views AI as part of larger sociotechnical systems.
Trustworthy AI must be both ethical and human-centric. Ethical AI requires that transparency, trust, and privacy be built into the core development and deployment of AI systems. Human-centric AI requires collaborative and meaningful interaction between humans and AI systems.
Research on Equity and AI investigates the potential of AI systems to amplify discrimination and marginalize vulnerable populations, and how that potential can be mitigated. AI and algorithmic bias can exacerbate existing social inequalities in domains including government systems, policing, social services, and search engine results. Philosophical, sociological, and anthropological research provides insights into fairness, manipulation, epistemic injustice, and historical contexts of oppression, helping ensure that AI does not perpetuate them.
AI literacy refers to understanding AI concepts and technologies, as well as their societal implications. Developing AI literacy involves expanding public understanding of AI concepts, recognizing AI applications, knowing when to trust AI, considering the ethics of AI, and promoting responsible AI use. AI literacy is an active area of research in education and human-centered computing.
Please contact Gordon Hull <ghull@charlotte.edu> if you are interested in joining the Center. Contact <stadimal@charlotte.edu> with any questions about the website.
*This work is supported by the National Science Foundation under Grant No. 2334319.