“Bullshit wins by arriving first, sounding certain, and asking not to be examined. The best detector is the one that can slow a claim down until its missing facts, hidden leaps, and borrowed authority become visible.”
— Aditya Mohan, Founder, CEO & Philosopher-Scientist, Robometrics® Machines
The ideal bullshit detector is not merely a better search engine, nor a louder fact-checker, nor a mechanical referee waving a red flag at obvious nonsense. It is a cognitive instrument built for a world in which falsehood is cheap, frictionless, emotionally efficient, and often socially rewarded. A good detector does more than decide whether a statement is true or false. It asks what kind of claim this is, what evidence would bear its weight, what incentives produced it, what audience it is designed to move, and how quickly it is likely to spread before anyone has time to examine it. In that sense, the ideal detector is part microscope, part courtroom, part air-traffic control system for public reason. It does not merely catch errors after impact. It tracks trajectories, predicts collisions, and separates a bad-faith performance from an honest mistake before both enter the same bloodstream.
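To make the instrument concrete, those questions can be written down as the fields of a triage record that every incoming claim would have to fill in before it earns circulation. The sketch below is illustrative only; the Python framing and every field name are mine, not the interface of any real system.

    from dataclasses import dataclass
    from enum import Enum

    class ClaimKind(Enum):
        EMPIRICAL = "empirical"        # checkable against evidence now
        PREDICTIVE = "predictive"      # checkable only after events unfold
        NORMATIVE = "normative"        # a value judgment, not a matter of fact
        DEFINITIONAL = "definitional"  # true or false only relative to a definition

    @dataclass
    class ClaimTriage:
        text: str                   # the claim, restated plainly
        kind: ClaimKind             # what kind of claim this is
        evidence_needed: list[str]  # what would bear its weight: documents, data, witnesses
        incentives: list[str]       # what interests produced it
        audience: str               # whom the framing is tuned to move
        spread_estimate: float      # expected shares per hour before scrutiny arrives

The point of the structure is that a claim unable to fill most of these fields has not yet earned a verdict at all.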
Brandolini gave this problem its modern internet form in 2013. His law states that the energy needed to refute bullshit is an order of magnitude greater than the energy needed to produce it. The brilliance of the observation lies in its engineering realism. Falsehood is often generated as a compressed projectile. It is short, vivid, emotionally tuned, and optimized for circulation. Truth, by contrast, is usually bulky. It requires context, definitions, sequencing, caveats, data quality checks, causal discipline, and the unhappy labor of saying not only why something is wrong, but how exactly it became believable. The asymmetry is social as well as intellectual. To refute nonsense often means not merely correcting a sentence, but confronting a tribe, a status game, an identity, a party line, a business model, or the private vanity of someone who has already gone public. The ideal detector therefore cannot be built on accuracy alone. It must also be built on speed, clarity, psychological tact, and the ability to survive contact with group loyalty.
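Brandolini’s phrase “an order of magnitude” can be read as a back-of-envelope inequality, where the factor of ten is his rhetorical estimate rather than a measured constant:

\[
E_{\text{refute}} \;\gtrsim\; 10 \cdot E_{\text{produce}}
\]

The inequality compounds at scale: a source producing n cheap falsehoods imposes roughly 10n units of refutation labor on its opponents, so the defender loses on throughput even when every individual rebuttal succeeds.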
The oldest philosophical ancestor I can verify for this problem is Aristotle, who wrote On Sophistical Refutations in the fourth century BCE. He does not give us Brandolini’s exact aphorism, but he does describe the ancient mechanism with remarkable precision. Some arguments, he says, only seem to be real arguments, especially to the inexperienced, who view them as though from a distance. That image is striking. Error often wins not because it is deep, but because it is seen at low resolution. Aristotle’s project was therefore not only to defend truth, but to classify the tricks by which sham reasoning impersonates the genuine article. One written example he analyzes turns on equivocation about a man named Coriscus, showing how a speaker can manufacture the appearance of contradiction by sliding between meanings. This is an early debunking manual. It treats deception not as a moral fog, but as a technical object with parts, patterns, and repeatable failure modes. More than two millennia later, Jonathan Swift would compress the same insight into a more famous line when he wrote in 1710 that falsehood flies while truth comes limping after it. Aristotle gave us the anatomy; Swift gave us the velocity.
In law, Aristotle’s anti-sophistical manual survives in operational form. A careful pleader first separates factual allegations from labels and conclusions, because a polished conclusion can look like proof even when the underlying facts are thin. At deposition and at trial, the same discipline appears in examination. Counsel uses short, controlled questions to lock down one proposition at a time, fixes the meaning of key words so a witness cannot quietly shift definitions, breaks a broad claim into smaller parts, and then compares each part against documents, prior testimony, and known facts. With an expert witness, the inquiry becomes even sharper. The issue is not whether the expert sounds impressive. The issue is whether the opinion can survive step-by-step inspection. What facts did the expert rely on? Are those facts complete or selective? What method was used? Is that method accepted, testable, and applied correctly here? Did the expert move from data to conclusion through a visible chain of reasoning, or did the expert simply leap from credentials to assertion? A strong cross-examination tries to expose that gap. It turns a grand opinion into a sequence of checkable steps and asks the court to look at each one in daylight. A vivid recent example came in Dominion Voting Systems v. Fox News. In 2023, after a record built from depositions, internal messages, and public broadcasts, the Delaware Superior Court wrote that it was “CRYSTAL clear that none of the Statements relating to Dominion about the 2020 election are true.” The larger lesson is both forensic and ancient: the most effective debunking in law rarely begins with outrage. It begins by forcing a claim to sit still long enough for its contradictions, missing premises, and hidden leaps to become visible.
By 2026, the large language model (LLM) has become both the finest public debunker ever built and one of the easiest engines for industrial-scale confusion. On its good days, an LLM can absorb a claim, restate it clearly, identify its hidden assumptions, retrieve relevant evidence, compare competing sources, explain degrees of confidence, translate technical material into ordinary language, and do so in seconds for a villager, a student, a mayor, or a nurse almost anywhere with a network connection. This matters because the historical weakness of truth has often been logistical rather than philosophical. Expert knowledge existed, but it was expensive, slow, geographically uneven, and trapped inside institutions. The LLM changes that. Yet the same system can also produce misinformation with frightening efficiency. It can generate polished false narratives, counterfeit expertise, fake citations, tailored emotional framing, and endless variations of the same lie adapted to different subcultures. It lowers the cost of both illumination and deception. In information warfare terms, it shortens the distance between invention and mass distribution. The detector and the fabricator now share the same chassis.
That is why, for 2026 and 2027, the best available path to truth is not “human experts first” in the old artisanal sense. Purely human expertise is too scarce, too expensive, too slow, and too unevenly distributed to serve as the primary defense for a planet flooded with synthetic claims. The leading candidate is an LLM functioning as an expert interface, but only under strict design discipline. It should begin from a reputable and responsible frontier-model provider, be grounded in live retrieval from named sources, display citations, separate evidence from inference, state uncertainty plainly, expose source conflicts, resist prompt-driven flattery, and preserve an auditable chain showing how it reached its answer. In second place comes structured crowd judgment such as Community Notes, public annotation, reputation-weighted review, and well-designed up-vote and down-vote systems. These are valuable because they distribute scrutiny and can throttle virality, but they are slower, less coherent, and more vulnerable to apathy, factional gaming, and timing problems. Human experts remain indispensable, but chiefly as the appellate court, the training source, and the gold-standard layer for high-stakes domains. They cannot be the front line for every claim any more than a society could assign a Supreme Court hearing to every rumor.
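One way to picture that design discipline is as the minimum shape of the answer object such an interface must return. The sketch below is a minimal illustration under my own assumptions; the class names, fields, and checks are hypothetical, not any provider’s actual API.

    from dataclasses import dataclass

    @dataclass
    class Citation:
        source_name: str      # a named source, not "studies show"
        url: str
        retrieved_at: str     # timestamp of the live retrieval

    @dataclass
    class Verdict:
        claim: str
        evidence: list[str]          # statements traceable to a named citation
        citations: list[Citation]
        inference: list[str]         # reasoning the model adds on top of evidence
        confidence: float            # uncertainty stated plainly, 0.0 to 1.0
        source_conflicts: list[str]  # places where named sources disagree
        audit_trail: list[str]       # ordered record of how the answer was built

    def violates_discipline(v: Verdict) -> list[str]:
        """Return the ways an answer breaks the design discipline, if any."""
        problems = []
        if v.evidence and not v.citations:
            problems.append("evidence asserted without a displayed citation")
        if not 0.0 <= v.confidence <= 1.0:
            problems.append("confidence is not stated on a plain scale")
        if not v.audit_trail:
            problems.append("no auditable chain from question to answer")
        return problems

The check function is the point: an answer that mixes evidence with inference, hides its sources, or omits its audit trail should be rejected by the surrounding system before a human ever reads it.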
Beyond 2027, the best bullshit detector may be something better than an LLM expert alone. The next step is a verifiable public reasoning system in which claims must carry machine-readable proof of origin, evidence lineage, and challenge history. Imagine a claim entering the world with attached provenance: who made it, what source documents it rests on, whether those documents are authentic, whether the images or audio are cryptographically signed, what qualified systems have already tested it, which counterarguments were raised, and what survived adversarial review.
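A minimal sketch of such a provenance record follows, with an HMAC standing in for the asymmetric signature scheme (such as Ed25519) a real system would use, and with every name and field below hypothetical.

    import hashlib
    import hmac
    import json

    SECRET = b"registry-signing-key"  # stand-in only: a real system would use an
                                      # asymmetric scheme, so no shared secret
                                      # would ever leave the signer

    def doc_fingerprint(doc_bytes: bytes) -> str:
        """Evidence lineage: refer to source documents by content hash."""
        return "sha256:" + hashlib.sha256(doc_bytes).hexdigest()

    def sign(record: dict) -> str:
        """Attach a tamper-evident signature to the canonical record."""
        canonical = json.dumps(record, sort_keys=True).encode()
        return hmac.new(SECRET, canonical, hashlib.sha256).hexdigest()

    def verify(record: dict, signature: str) -> bool:
        """Anyone holding the key material can re-check the record unchanged."""
        return hmac.compare_digest(sign(record), signature)

    record = {
        "claim": "City water tests exceeded the lead limit in March.",
        "author": "jane.reporter@example.org",
        "source_docs": [doc_fingerprint(b"lab report, March batch")],
        "challenge_history": [
            "2026-04-02: sample-size objection raised",
            "2026-04-09: objection answered; claim survived review",
        ],
    }
    signature = sign(record)
    assert verify(record, signature)           # authentic and unmodified
    record["claim"] = "All tests were clean."  # any quiet edit breaks the signature
    assert not verify(record, signature)

The design choice that matters is tamper evidence: because source documents are referenced by content hash and the whole record is signed, any quiet edit to the claim or its lineage breaks verification rather than slipping through.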
The LLM in that world is no longer the sovereign judge. It becomes the interpreter of a larger truth infrastructure, one that combines provenance, cryptographic authenticity, sensor-backed records, expert review, and public contestability. That would be better than today’s model-only approach because it would move us from eloquent judgment to inspectable reality-tracking. The ideal bullshit detector, then, is not a chatbot with good manners. It is a civilization-scale instrument panel for epistemic flight, where every serious claim arrives not merely with confidence, but with a black box.