In a recent study published in iScience, C-BEAM researchers Alicia von Schenk and Victor Klockmann explore how artificial intelligence can support lie detection—a task where humans typically perform no better than chance. Traditional tools like polygraphs remain unreliable; AI offers a scalable alternative, but with significant trade-offs.
To build their dataset, the researchers asked nearly 1,000 participants to write two statements about their weekend plans: one true and one fabricated. Participants were incentivized to make their lies as convincing as possible. After quality control, the final dataset consisted of 1,536 statements from 768 authors, half true and half false. On this data, the team trained a lie detection classifier built on Google’s open-source language model BERT. The trained model correctly identified 81% of the false statements.
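The paper’s actual training pipeline is not reproduced here, but the general recipe, fine-tuning BERT as a binary true/false classifier, is standard. Below is a minimal sketch assuming the Hugging Face transformers and datasets libraries; the example statements, file-free inline dataset, and hyperparameters are illustrative assumptions, not details from the study.

```python
# Minimal sketch: fine-tune BERT as a binary true/false statement classifier.
# Not the authors' actual code; statements and hyperparameters are illustrative.
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Hypothetical records: each pairs a statement with a label (0 = true, 1 = false).
records = [
    {"text": "I am going hiking with friends on Saturday.", "label": 0},
    {"text": "I will spend Sunday repainting the kitchen.", "label": 0},
    {"text": "I am flying to Paris for my cousin's wedding.", "label": 1},
    {"text": "I will run a marathon on Sunday morning.", "label": 1},
    # ...the real dataset contained 1,536 labeled statements from 768 authors.
]
dataset = Dataset.from_list(records).train_test_split(test_size=0.25)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

def tokenize(batch):
    # Pad/truncate statements to a fixed length so they can be batched.
    return tokenizer(
        batch["text"], truncation=True, padding="max_length", max_length=128
    )

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lie-detector",       # illustrative output path
        num_train_epochs=3,              # illustrative hyperparameters
        per_device_train_batch_size=16,
    ),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
)
trainer.train()
print(trainer.evaluate())  # eval loss; add a compute_metrics hook for accuracy
```

A figure like the study’s 81% detection rate would be measured on held-out statements, which in a sketch like this would require adding a compute_metrics function to the Trainer.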
The main experiment tested how people interact with such a system. A total of 2,040 participants were shown 510 randomly selected statements and asked to judge whether each was true or false. Only a third of them opted to consult the AI. Those who did, however, followed its advice in 88% of cases and were far more likely to accuse others of lying, raising the accusation rate from 19% to 58%. As the authors note, “When people actively choose to rely on AI, they tend to follow it almost blindly.” While the tool improves detection, it also raises ethical concerns: more false accusations, eroded social trust, and the risk of misuse in sensitive areas like hiring or content moderation.
Read the full study: "Lie detection algorithms disrupt the social dynamics of accusation behavior."
News report in MIT Technology Review