"Regulate harms, not ideas. Aim the law at fraud and coercion—and leave room for humans and machines to speak."
— Aditya Mohan, Founder, CEO & Philosopher-Scientist, Robometrics® Machines
The modem’s hiss braided with street noise outside the Critical Path AIDS Project’s storefront office in Philadelphia. On the screen sat pages about safer sex, HIV transmission, and treatment—plain words and medical links that teenagers sometimes wrote in to thank them for finding. On February 8, 1996, news broke that the Communications Decency Act had been signed. Kiyoshi Kuromiya printed the statute and circled the words “indecent” and “patently offensive.” A volunteer asked the question no one in the room could answer: Does this make our health pages a crime if a 17-year-old reads them? Credit-card gates and adult ID numbers—the law’s suggested “fixes”—made no sense for a nonprofit clinic site. For an hour they debated whether to pull the very sections most often visited by teens. Instead, Kuromiya picked up the phone and called civil-liberties lawyers. If the law’s chill meant young people lost access to life-saving information, he said, he would stand as a plaintiff.
Across the country in Los Angeles, Patricia Nell Warren at Wildcat Press faced the same knot. Excerpts from gay and lesbian literature, links to community resources, and author forums—ordinary publishing work—suddenly looked risky under a statute that punished speech merely because a minor might see it online. She drafted an affidavit and began gathering exhibits: pages, timestamps, readership notes. Before any injunctions, before any arguments, the harm had already started—publishers weighing deletions, health workers considering locks on their doors. By nightfall, phones and faxes tied together a coalition—librarians, writers, educators, rights groups—moving toward the only remedy left: a constitutional challenge.
The CDA had been signed into law on February 8, 1996, as Title V of the Telecommunications Act. A special three‑judge district court in Philadelphia—Chief Judge Dolores K. Sloviter, Judge Stewart Dalzell, and Judge Ronald L. Buckwalter—held a five‑day evidentiary hearing in March–April 1996 and issued a preliminary injunction on June 11, 1996, blocking the statute’s “indecent transmission” and “patently offensive display” crimes while preserving obscenity and child‑pornography prosecutions. The case leapt to the Supreme Court under the CDA’s fast‑track review provision. On March 19, 1997, the Justices heard argument; on June 26, 1997, in Reno v. ACLU, the Court struck down the core CDA crimes 7–2. Writing for the Court, Justice John Paul Stevens set the baseline that would govern the next three decades: the Internet, “unlike radio, receives full First Amendment protection,” the law placed an “unacceptably heavy burden” on protected speech, and the risk of criminal sanctions hung “like the proverbial sword of Damocles” over every online speaker. Justice Sandra Day O’Connor, joined by Chief Justice William Rehnquist, concurred in part and dissented in part, sketching a zoning‑style approach to create online “adult zones,” but agreeing that the CDA swept too broadly.
Reno confirmed that full First Amendment protections apply online. It rejected attempts to graft broadcast‑style indecency rules onto the Internet and insisted on narrow tailoring and less‑restrictive alternatives (age‑gating, user controls, labeling) when government seeks to regulate lawful speech to protect minors. The decision also showed how American law adapts to new code with old text: courts can read the First Amendment in light of a new medium without inventing a new constitutional regime. After Reno, Congress tried again with the Child Online Protection Act (1998); the Supreme Court kept COPA enjoined in 2002 and 2004. By contrast, the Children’s Internet Protection Act (2000) survived in 2003 for public libraries, framed as a funding condition with opt‑out filtering. Crucially, Reno invalidated only the CDA’s indecency crimes—Section 230’s separate immunity remained intact—a structural reason the open web (and later platforms) could scale.
Reno’s rule is the foundation for generative‑AI speech and hosting: when the state regulates or pressures platforms to regulate lawful expression, it must be narrow, precise, and evidence‑based. That baseline applies to all modalities, not just text.
What counts as speech now
Text: large language models drafting articles, code, or political persuasion.
Images: diffusion/transformer models producing photorealistic or stylized pictures (art, satire, reportage‑like composites).
Audio/voice: TTS and voice‑cloning systems generating narration, songs, or character dialogue.
Video: generative and editing models composing or altering moving images, from animation to synthetic news clips.
Under Reno’s approach, government cannot outlaw these media wholesale as “offensive” or “indecent”; it must target unprotected speech (obscenity, true threats, CSAM, narrowly defined incitement) or adopt less‑restrictive alternatives.
Implications for model developers and hosts
Editorial discretion is protected: private providers may set house rules; state coercion (jawboning, threats) triggers First Amendment scrutiny.
Narrow tailoring over blunt bans: youth‑safety goals are better served by age‑gating, on‑by‑default family filters, and user controls, not criminalizing broad swaths of lawful AI outputs.
Labeling/provenance as a lighter touch: watermarks, content credentials, and user disclosures are less restrictive than prior restraints on creation (a minimal provenance sketch follows this list).
Viewpoint neutrality: rules that suppress a viewpoint (political, artistic, social) face the steepest scrutiny when tied to state action; safety policies should be content‑agnostic and harm‑specific.
Training vs. outputs: restrictions on training data raise different questions from restrictions on outputs; Reno’s core speaks to what users can see and share, including what models write, draw, sing, and show.
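To make the labeling point concrete, here is a minimal sketch in Python of a provenance “sidecar” record for a generated file: hash the output, note which model produced it, and attach a plain disclosure label. The function name, the JSON fields, and the sidecar layout are illustrative assumptions, not the C2PA/Content Credentials format or any particular platform’s schema.

# Minimal provenance sidecar for generated media (illustrative, not C2PA).
import datetime
import hashlib
import json
from pathlib import Path

def write_provenance_sidecar(media_path: str, model_name: str, model_version: str) -> Path:
    """Hash the generated file and record what produced it, plus a user-facing label."""
    media = Path(media_path)
    digest = hashlib.sha256(media.read_bytes()).hexdigest()  # binds the label to this exact file
    record = {
        "asset": media.name,
        "sha256": digest,
        "generator": model_name,
        "generator_version": model_version,
        "created_utc": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "disclosure": "AI-generated content",  # a label, not a restraint on creation
    }
    sidecar = media.parent / (media.name + ".provenance.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar

# Example: write_provenance_sidecar("satire.png", "studio-diffusion", "2.1")

A sidecar like this travels with the file and can be surfaced by downstream hosts, which is why disclosure duties of this kind sit on the lighter-touch side of Reno’s ledger.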
Forensic, safety, and authenticity tooling
Target the unlawful, not the merely offensive: classifiers that detect CSAM, doxxing, extortion, or credible threats fit Reno’s precision demand; bulk suppression of “indecency” does not.
Appeals and audit trails: log prompts, filters triggered, moderator decisions, and model versions to prevent over‑blocking and enable accountability without criminalizing lawful expression (a minimal logging sketch follows this list).
Synthetic voice & deepfakes: punish fraud, impersonation, or material deception (election fraud, commercial misrepresentation) without banning the medium; labeling duties beat speech bans.
Evidence integrity: when law enforcement enhances or reconstructs audio/video with generative tools, require chain‑of‑custody, version‑locked models, and reproducible prompts so synthetic artifacts never masquerade as originals.
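To ground the audit-trail and evidence-integrity points, here is a minimal Python sketch of an append-only moderation log: each decision is recorded with a prompt hash, the model version, the specific filters triggered, and who (or what) made the call. The ModerationEvent fields, the log_moderation_event helper, and the JSONL layout are assumptions for illustration, not any platform’s actual schema.

# Append-only moderation audit trail (illustrative schema).
import datetime
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class ModerationEvent:
    prompt_sha256: str            # hash rather than raw text, to limit sensitive data in logs
    model_version: str            # version-locked model identifier
    filters_triggered: list[str]  # e.g. ["credible_threat"], named harms rather than "indecency"
    decision: str                 # "allowed" | "blocked" | "escalated"
    reviewer: str                 # "automated" or a human reviewer ID
    timestamp_utc: str

def log_moderation_event(log_path: str, prompt: str, model_version: str,
                         filters_triggered: list[str], decision: str,
                         reviewer: str = "automated") -> None:
    """Append one JSON line per decision so over-blocking can be audited and appealed."""
    event = ModerationEvent(
        prompt_sha256=hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        model_version=model_version,
        filters_triggered=filters_triggered,
        decision=decision,
        reviewer=reviewer,
        timestamp_utc=datetime.datetime.now(datetime.timezone.utc).isoformat(),
    )
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(event)) + "\n")

The same habit of hashing inputs and pinning model versions carries over to forensic use: a clip reconstructed and logged this way can always be distinguished from an original.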
Reno teaches a simple discipline for the AI era: target harms, not ideas. Build rules that punish fraud, coercion, and deception; use lighter‑touch tools—age gates, labels, user choice—for everything else; and keep government hands off lawful speech, whether typed, drawn, sung, or synthesized.