Professor of Philosophy and Public Policy | Director of the Center for Professional and Applied Ethics | Affiliate, School of Data Science
There are a few. One of the obvious ones is equity - there are all kinds of ways that AI systems can further marginalize folks who are already marginalized or vulnerable. If we don’t work very proactively to prevent that, it’s going to happen. It’s already happening. This is particularly true when AI gets used in systems - government or corporate - that make decisions about people, like whether they are entitled to social benefits, or credit, and so on. Those aren’t getting the attention they need. There are also big problems with privacy, and those are continuous with the ones we’ve already seen around Big Data - where the ability not just to collect data, but to make inferences about people (whether accurate or not), is going way up. Finally, I think a sleeper issue with AI is intellectual property. There’s a lot of litigation percolating up through the courts where owners of IP are challenging the use of their material by AI systems. There’s going to be more of that, and it’s easy to see where the concern comes from, even if you ignore questions about scraping training data - if you do almost any sort of Google search now, an AI answer pops up on top, and a lot of the time, that answer reads like a plagiarized version of one of the top hits.
It’s easier to say what I think won’t work, which is industry self-regulation. I think it’s instead going to take a combination of cultural shifts and awareness about what AI is good for and what it’s not, combined with quite a bit of actual regulation to get better designed systems. The history of privacy regulation offers a lot of useful lessons here, because a lot of the regulatory strategies there haven’t worked.
I’d been working on privacy and data for a while, and so AI was sort of on my radar. One of my colleagues here at the Center, Mary Lou Maher, approached me to work as part of a team putting together a grant on trustworthy AI, and that was really when I first started working seriously on AI.
There’s a lot of work on AI. A lot of that work is really technical - and the technical work is important! But there are also a bunch of people who think the political, ethical and legal aspects of AI need more attention than they’re getting, and more than the AI companies are giving them. So several of us here started working together to build a research team around those areas. We picked AI literacy, equity and trustworthy AI because those areas are important no matter what AI tools you develop and what you do with them.
I’m trained as a philosopher - my graduate work was largely in the history of political philosophy and contemporary French theory. But early in my career, I was teaching courses in “computer ethics,” and I pretty quickly developed interests in intellectual property and then in privacy. I also realized early on that law professors had a lot of interesting things to say on these topics that weren’t making it into non-law disciplines. So I started doing work that tried to bring all these areas together - political theory, law and technology. I’ve been lucky to be here at Charlotte, where that kind of interdisciplinary work is encouraged. What makes me unusual in this kind of context is the combination of philosophy and law.
I’ve been writing papers on AI and other forms of algorithmic governance, and trying to think about the concepts we need to understand these better. In some ways it’s pretty abstract, but it’s aimed at understanding the governance strategies that will help us make the best use of AI. I’ve also been doing some early-stage work on the ways that language models (like ChatGPT) embed social values and norms. That’s harder than it might initially seem, not just because they are incredibly complex artifacts, but also because the way they produce language isn’t at all like what people do. So you have not just questions about the way technologies embed social values, but also a non-standard set of questions about the relation between language and social values. I’ve been trying to use that disjunction between human language use and AI to get at some of those normative questions.
Pursue the interest! There’s good reason to think that, even once you get past the hype, AI is going to be really important across a range of social issues, political systems, medical research (that’s actually an area where I’m optimistic), education, etc. More people need to be thinking about this and it’s precisely because those issues are cross-cutting that the best work is interdisciplinary.
That’s really hard, so I’m going to suggest two. One was actually written a little before AI became such a news item. It’s called Automating Inequality, by Virginia Eubanks. It’s a series of case studies about what can go wrong when algorithmic governance gets embedded in the delivery of social services. It’s also sobering, because there are problems even when the people involved are well-intentioned and careful about their work. The other is Kate Crawford’s Atlas of AI. It’s got a strong thesis - that AI industries are “extractive” (like mining, or oil) - but she goes through and thinks about all the stages in AI development, from lithium mining to the training data for facial recognition, and shows how those often go wrong, and why the history of systems like emotion recognition isn’t nearly as scientific as we’re told it is. It’s super accessible, and no matter what you think of the ultimate argument, it’s a comprehensive look at the kinds of things we ought to be concerned about when we talk about AI. They’re both serious critiques of some of the directions we’re headed in.