"Privacy doesn’t end at the glass door or the app’s login screen; it follows the person. In an age when AI models can listen without touching, watch without entering, and infer without asking, the Fourth Amendment’s demand still stands: show the cause, name the target, and let a judge see the method. Black-box guesses are not reasons; they are claims that must be tested in daylight.."
— Aditya Mohan, Founder, CEO & Philosopher-Scientist, Robometrics® Machines
Neon buzzed over a glass phone booth wedged between a newsstand and a chrome‑edged diner that perfumed the sidewalk with coffee and fried onions. A man in a dark overcoat stepped in from the night, dropped a dime, and tugged the folding door until its rubber seal met the jamb with a soft click. Inside, the booth made a little world: fluorescent hum, the scent of nickels and ozone, the handset’s black enamel warm in his palm, breath fogging the glass. He cupped the receiver, turned slightly toward the corner, and spoke low. Across the curb, agents in an unmarked sedan tracked the cadence more than the words; a tiny microphone affixed to the booth’s exterior relayed a signal to a tape machine that whirred softly in the dark. No lock was picked; no wire was spliced. Yet when the reels clicked to a stop, a larger question began: if the government never crosses a physical threshold, can it still invade a person’s privacy?
Sunset Boulevard, Los Angeles (Feb. 19–25, 1965)
Exact location: 8210 Sunset Boulevard (8200 block), Hollywood.
Agents: FBI, observing from an unmarked car.
Method: A microphone affixed to the outside of a public phone booth transmitted audio to a portable recorder—no physical entry, no splice into the phone line.
Legal significance: These recordings became the centerpiece of Katz v. United States (1967), where the Supreme Court held that the Fourth Amendment protects people, not places, and Justice Harlan articulated the “reasonable expectation of privacy” test.
Nearly three years later, the answer arrived. In Katz v. United States (1967), the Court rejected the cramped idea that privacy ends at property lines. Justice Stewart set the pivot in a sentence—“the Fourth Amendment protects people, not places”—and held that recording Katz’s words from outside the booth was a search. Justice Harlan’s concurrence supplied the enduring two‑part test: (1) whether the person exhibited an actual, subjective expectation of privacy; and (2) whether that expectation is one society is prepared to recognize as reasonable. By closing the door and paying the toll, Katz signaled an expectation that the content of his call would remain private; society, the Court said, should honor that. Trespass was no longer the sole gatekeeper to the Fourth Amendment, even as later cases (e.g., United States v. Jones (2012)) revived property concepts alongside Katz. The conviction could not stand; the recordings required a warrant grounded in particularized cause.
Katz is the hinge that lets privacy ride with the person into new technologies. It anchors decisions on phones (Riley v. California (2014)), historical location trails (Carpenter v. United States (2018)), thermal imaging (Kyllo v. United States (2001)), and long‑term tracking (United States v. Jones (2012)); it also frames debates over geofence and reverse‑keyword warrants. The common thread is not the gadget but the exposure—how much of intimate life the state can assemble without a judge’s leave. The “reasonable expectation” inquiry now lives alongside limits on the third‑party doctrine, as courts recognize that modern life forces disclosures to networks we cannot meaningfully avoid. If the Amendment follows people, not places, it must also follow the trails they shed—packets, pings, and patterns—demanding warrants or narrowly tailored exceptions and robust minimization when government seeks to collect, correlate, or infer.
AI changes both the scale and the subtlety of listening. Pattern‑matching AI models can reconstruct speech from minute vibrations in a chip bag across the room, re‑identify people across camera networks by gait or posture, infer identity from keystroke cadence, cluster “unusual activity” from hours of feeds, mine ambient audio for mood, and fuse these streams into profiles—without touching a doorknob. Foundation and multimodal AI models add another layer, summarizing text, voice, and video to generate behavioral inferences at population scale.
Under Katz, when the government deploys an AI model to capture or infer content a person has reasonably walled off, it conducts a search. That requires judicial oversight; particularity (targets, time windows, locations, and data types); and minimization—plus auditable logs of what was collected, which AI‑model version was used, the thresholds applied, and how outputs were filtered. Because AI models can err, drift, and encode bias, due process also demands explainability sufficient for challenge: documented training‑data provenance and jurisdiction, pre‑processing steps, validation methods, class‑by‑class error rates, calibration reports, known failure modes, and drift monitoring, together with records that let a court trace the path from input to inference. Where trade secrets are claimed, courts can order supervised expert access or neutral audits under protective orders.
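What such an auditable record might contain can be made concrete, if only as a sketch. The following Python fragment is purely illustrative: every field name (warrant_id, model_version, decision_thresholds, and so on) is an invented label for the categories the paragraph above describes, not a reference to any real evidentiary standard, agency schema, or court rule.

```python
"""Hypothetical sketch of an auditable AI-collection log entry.

All field names and values are illustrative assumptions, not any
agency's actual schema or a court-approved standard.
"""
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class CollectionLogEntry:
    # Links the collection back to the authorizing warrant or exception.
    warrant_id: str
    # Which model produced the inference, pinned to an exact version.
    model_name: str
    model_version: str
    # Particularity: what was queried, over what window, from where.
    data_types: list[str]
    time_window_start: str
    time_window_end: str
    locations: list[str]
    # Thresholds applied and how outputs were filtered (minimization).
    decision_thresholds: dict[str, float]
    minimization_steps: list[str]
    # Pointers to validation artifacts a court or defense expert could inspect.
    training_data_provenance: str
    error_rate_report: str
    calibration_report: str
    # When this entry was written, in UTC.
    logged_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def append_to_audit_log(entry: CollectionLogEntry, path: str) -> None:
    """Append one entry as a JSON line to an append-only audit file."""
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(entry)) + "\n")


if __name__ == "__main__":
    # Entirely fictional example values.
    entry = CollectionLogEntry(
        warrant_id="2025-MJ-0142",
        model_name="gait-reid",
        model_version="3.1.0",
        data_types=["camera_feed_metadata"],
        time_window_start="2025-03-01T00:00:00Z",
        time_window_end="2025-03-07T23:59:59Z",
        locations=["100 block, Example Ave"],
        decision_thresholds={"match_score": 0.92},
        minimization_steps=["discard non-matching detections within 24h"],
        training_data_provenance="provenance-report-v3.pdf",
        error_rate_report="per-class-error-v3.csv",
        calibration_report="calibration-v3.json",
    )
    append_to_audit_log(entry, "collection_audit.jsonl")
```

The point of the sketch is not the particular fields but the property they create together: a record specific enough that a judge, a defense expert, or a neutral auditor can trace what was collected, by which model version, under which thresholds, and how the surplus was discarded.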
Two further guardrails follow from Katz. First, warrants (or narrowly tailored exceptions) should describe not only the place and things to be searched but also the AI‑model class, data sources to be queried, model thresholds, and retention limits—so generalized dragnets do not masquerade as “analytics.” Second, AI‑model outputs are leads, not verdicts; human adjudicators must corroborate and weigh them against contrary evidence.
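The first guardrail lends itself to a mechanical illustration. The sketch below, again with invented names and rules assumed only for exposition, refuses a proposed query whose scope exceeds what a warrant describes; nothing in Katz or its progeny prescribes this code, but it shows how particularity could be checked before collection begins rather than litigated after.

```python
"""Illustrative particularity check: reject queries broader than the warrant.

The dataclasses and rules below are assumptions for exposition, not any
court-approved standard.
"""
from dataclasses import dataclass
from datetime import datetime


@dataclass
class WarrantScope:
    model_class: str              # e.g. "gait-reid" (illustrative label)
    data_sources: set[str]
    window_start: datetime
    window_end: datetime
    min_threshold: float          # lowest confidence the warrant permits acting on
    retention_days: int


@dataclass
class ProposedQuery:
    model_class: str
    data_sources: set[str]
    window_start: datetime
    window_end: datetime
    threshold: float
    retention_days: int


def within_scope(warrant: WarrantScope, query: ProposedQuery) -> list[str]:
    """Return a list of particularity violations; an empty list means the query fits."""
    violations = []
    if query.model_class != warrant.model_class:
        violations.append("model class differs from the one named in the warrant")
    if not query.data_sources <= warrant.data_sources:
        violations.append("query reaches data sources the warrant does not name")
    if query.window_start < warrant.window_start or query.window_end > warrant.window_end:
        violations.append("query window extends beyond the authorized period")
    if query.threshold < warrant.min_threshold:
        violations.append("threshold is looser than the warrant allows")
    if query.retention_days > warrant.retention_days:
        violations.append("retention exceeds the warrant's limit")
    return violations


if __name__ == "__main__":
    # Entirely fictional scope and query.
    warrant = WarrantScope(
        model_class="gait-reid",
        data_sources={"transit_cameras"},
        window_start=datetime(2025, 3, 1),
        window_end=datetime(2025, 3, 7),
        min_threshold=0.9,
        retention_days=30,
    )
    query = ProposedQuery(
        model_class="gait-reid",
        data_sources={"transit_cameras", "retail_cameras"},  # broader than authorized
        window_start=datetime(2025, 3, 1),
        window_end=datetime(2025, 3, 7),
        threshold=0.95,
        retention_days=30,
    )
    for v in within_scope(warrant, query):
        print("rejected:", v)
```

Run against the fictional example, the check rejects the query for reaching a data source the warrant never named, which is exactly the kind of quiet scope creep the particularity requirement exists to stop.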
Black‑box AI‑model outputs are not reasons—they are claims that must be tested. The phone booth became cyberspace, and the test Harlan sketched must now reach the AI model as surely as it once reached the microphone.