Mission

The rise of social media and massive online information sharing has led to a dramatic increase in the spread of both inadvertent misinformation and strategic disinformation (e.g., foreign influence operations seeking to undermine democratic nations). Additional challenges arise in helping decision-makers navigate conflicting information (e.g., information coming from different sources or evolving during a crisis, such as a natural disaster or pandemic). To address this challenging information environment, our mission is to design, build, and test innovative AI technologies to support journalists, professional fact-checkers, and information analysts. Our use-inspired research to protect information integrity worldwide drives our broader work to develop responsible AI technologies that are both fair (protecting the different stakeholders who may bear disproportionate impacts) and explainable (so that stakeholders can best capitalize on AI speed and scalability alongside their own knowledge, experience, and human ingenuity).

The short version: We design, build, and test innovative AI technologies to protect information integrity and support the fact-checking work of journalists, professional fact-checkers, and information analysts.

Research Team

Quick Links: Publications · Official UT Project Page

Core Faculty


Affiliated Faculty

Affiliated UT Centers & Initiatives

Center for Media Engagement

Computational Media Lab

Global Disinformation Lab

Good Systems

Machine Learning Lab


Students

Alex Boltz (Department of Government)

Anubrata Das (School of Information)

Jifan Chen (Computer Science)

Tanya Goyal (Computer Science)

Venkata S. Govindarajan (Linguistics)

Kami Vinton (School of Journalism and Media)

Houjiang Liu (School of Information)

Terrence Neumann (Information, Risk and Operations Management)

Li Shi (School of Information)


Alumni

Chenyan Jia (now Northeastern University)

Venelin Kovatchev (now University of Birmingham)