XX
— Aditya Mohan, Founder, CEO & Philosopher-Scientist, Robometrics® Machines
How a pre‑dawn thermal scan reshaped the Fourth Amendment for the age of remote sensing and AI.
Frost feathered the triplex eaves and turned the cul‑de‑sac into a breath‑white map. From the passenger seat of Agent William Elliott’s unmarked car, Oregon National Guard Sergeant Dan Haas raised an Agema Thermovision 210 and steadied it on the window frame. At about 3:20 a.m., the scope painted the houses in gradients: the neighbor’s unit blue‑cool; the middle roof shedding pale heat; the garage roof and side wall of Danny Kyllo’s unit glowing hot. The pattern matched what Haas had seen around high‑intensity grow lamps—plumes rising where insulation should have kept secrets. Data points stacked up: unusual power bills, a faint nocturnal hum, the hot wall on the screen. By breakfast, the scan was in an affidavit; a warrant followed; later, a search. The device never crossed the threshold, but it reported what lived behind it.
In Kyllo v. United States, 533 U.S. 27 (2001), the Supreme Court drew a bright line at the home’s entrance. Writing for the Court, Justice Antonin Scalia held that “obtaining by sense‑enhancing technology any information regarding the interior of the home … constitutes a search—at least where the technology is not in general public use.” Building on Payton v. New York (1980), the Court reaffirmed the long‑standing principle that the Fourth Amendment draws a firm line at the entrance to the house. If the government wants to learn inside facts of a home by pointing machines at it from the street, that is a search—bring a warrant.
Kyllo updates the Fourth Amendment for the era of invisible eyes. Thermal imagers, RF/Wi‑Fi channel‑state information (CSI) sensing, mmWave radar, UWB (ultra‑wideband), LiDAR backscatter, and smart‑meter analytics can all suggest interior details without breaking a lock. Kyllo says the home is different: remote probing from outside still triggers the Constitution. For investigators, the practical playbook is narrow exigency, particularized warrants, and technical minimization that confines any sensor sweep to what the judge authorized. For courts, “general public use” is a caution light—not a loophole. Cheapness or consumerization does not erase the sanctity of the threshold.
Not every risk in the AI era is a fancy sensor. A closer‑in threat is questioning the user’s own AI—personalized assistants with memory recall, chat histories, and local indexes of photos, notes, and smart‑home logs. Under Kyllo’s logic, if those AI‑assisted answers reveal facts about the interior of the home and your personal life—what you did, where you were, and even your likes and dislikes—that were not exposed to public view, obtaining them without consent or a warrant is a search all the same.
What this looks like
Memory recall: A home assistant (on‑device or account‑linked) is asked: “What’s in the garage?” “How many grow lights were running last week?” “Who slept in the guest room?” If the assistant answers from private, in‑home data (camera thumbnails, device logs, notes), Kyllo’s warrant wall applies.
Personalized model Q&A (OpenAI or other vendors): Investigators querying a user‑tuned model via the user’s account or device should treat the model’s memory stores, chat logs, retrieval indexes, and embeddings as places to be searched—named in the warrant with date ranges, topics, and sources.
On‑device vs. cloud copies: On‑device memories are strongly tied to the home and its effects—bring a warrant. Cloud backups or vendor‑hosted mirrors may also require compelled process; particularity still governs to avoid bulk disclosure.
Question‑as‑search: Natural‑language prompts do not shrink the Constitution. If the question elicits interior facts, it must fit within the authorized scope just like keyword searches in a file system.
Practical guardrails
Particularity: Identify the models, memory stores, chat logs, and vector indexes to be queried; specify allowed topics/time windows.
Minimization: Disable general memory browsing; confine queries to approved sources; log the exact prompts/responses.
Provenance: For any answer used as evidence, tie the model’s output back to the underlying documents or logs; preserve originals and record which memory item the model cited.
Reliability: Treat model summaries as investigative leads unless corroborated; avoid charging decisions based solely on unattributed AI recall.
Consent clarity: Voluntary user consent can authorize a targeted query; absent that, get a warrant.
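The guardrails above amount to a thin enforcement layer between the investigator and the assistant's memory stores: refuse anything outside the warrant's topics, sources, and time window, and log the exact prompt/response pair. A minimal sketch in Python, assuming entirely hypothetical names (`WarrantScope`, `run_scoped_query`, the `model_fn` callable); this mirrors no vendor's actual API:

```python
from dataclasses import dataclass, field
from datetime import date
import hashlib
import json

@dataclass
class WarrantScope:
    # Hypothetical mirror of a warrant's particularity terms
    allowed_topics: set        # e.g. {"grow_lamp_power"}
    date_start: date
    date_end: date
    allowed_sources: set       # e.g. {"device_logs", "chat_logs"}

@dataclass
class ScopedQueryLog:
    entries: list = field(default_factory=list)

    def record(self, prompt: str, response: str) -> dict:
        # Log the exact prompt and response, fingerprinted for later audit
        entry = {"prompt": prompt, "response": response}
        entry["digest"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

def run_scoped_query(prompt, topic, source, when, scope, log, model_fn):
    """Refuse any query outside the warrant's topics, sources, or
    time window; log every authorized prompt/response pair."""
    if topic not in scope.allowed_topics:
        raise PermissionError(f"topic {topic!r} not authorized by warrant")
    if source not in scope.allowed_sources:
        raise PermissionError(f"source {source!r} not authorized by warrant")
    if not (scope.date_start <= when <= scope.date_end):
        raise PermissionError(f"date {when} outside authorized window")
    response = model_fn(prompt)
    log.record(prompt, response)
    return response
```

The design choice is that scope checks happen before the model is ever queried, so an out-of-scope question is never answered and the refusal itself is visible in the audit trail.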
Can AI outputs be used in court?
Yes—if the underlying collection was lawful and the method is reliable. Courts separate two questions:
Was there a search? If sense‑enhancing tech (thermal, RF, mmWave, UWB, LiDAR) revealed interior facts without a warrant and no exigency, Kyllo points to suppression as fruit of the unlawful search.
Is the output admissible? Even with a valid warrant, AI‑generated or AI‑assisted evidence must satisfy Rule 702 (expert reliability/Daubert), Rule 901 (authentication), and Rule 403 (probative value vs. unfair prejudice).
Enhancement (contrast, denoise, deblur, temperature‑to‑color mapping) clarifies lawfully captured data. Admissibility hinges on reproducibility, documented parameters, and expert explanation.
Generation (inpainting missing regions, super‑resolution that hallucinates textures, synthetic audio “reconstruction”) risks adding content that never existed. Use for lead‑finding may be acceptable; as trial evidence it triggers 702/403 concerns and should be clearly labeled, with originals preserved and shown first.
Inference at scale (e.g., a model outputting “grow‑op likelihood: 0.82”) is opinion evidence; tie it to transparent features, confidence intervals, and validation in comparable environments.
Probable cause: Courts apply a practical, commonsense totality‑of‑the‑circumstances analysis. An AI score can be one factor if inputs were lawfully obtained (e.g., thermal with a warrant; utility records via subpoena) and the model’s rationale is documented.
Trial proof: Reliability demands more—disclose training data sources, sensor specs, versioned model weights, error rates, and standard operating procedures; maintain full chain‑of‑custody for raw signals and intermediate files.
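That chain of custody for raw signals and intermediate files can be made concrete as a hash manifest recorded at each model run. A minimal standard-library sketch; the field names (`model_version`, `parameters`) and the `custody_record` helper are illustrative, not a prescribed evidentiary format:

```python
import hashlib
import datetime

def sha256_hex(data: bytes) -> str:
    # Fingerprint used for raw signals and every derived file
    return hashlib.sha256(data).hexdigest()

def custody_record(raw: bytes, processed: bytes,
                   model_version: str, params: dict) -> dict:
    """One entry tying a processed output back to its raw input,
    the exact versioned model, and the run parameters."""
    return {
        "captured_utc": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "raw_sha256": sha256_hex(raw),
        "processed_sha256": sha256_hex(processed),
        "model_version": model_version,
        "parameters": params,
    }

def verify_raw(record: dict, raw: bytes) -> bool:
    # Later, confirm the raw signal on file is the one the model actually saw
    return record["raw_sha256"] == sha256_hex(raw)
```

Because the digests are computed at capture time, any later substitution of the raw signal or the processed exhibit is detectable by recomputation.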
The “general public use” caveat
Kyllo referenced technology “not in general public use.” Even where a sensor becomes common (cheap thermal cameras, smart meters), the safer reading in home‑privacy cases is function over price: if a tool reveals intimate details of the interior otherwise unknowable without entry, treat it as a search and bring a warrant. Cheap does not mean consequence‑free.
Guardrails for AI‑assisted sensing
Warrant particularity: Specify sensors, vantage points, time windows, areas of the structure, and the exact analyses permitted (e.g., thermal plume mapping only; no RF occupancy inference).
Minimization: Pre‑filter to the authorized area; avoid full‑structure sweeps; sandbox models to approved features.
Documentation: Version‑lock models; record prompts/parameters; export immutable logs of every run; preserve raw and processed outputs.
Transparency to the court: Provide protocols and validation studies; state error rates and known failure modes (insulation variance, weather, adjacent heat sources).
No dragnets: Mass neighborhood thermal or RF sweeps without individualized suspicion risk general‑warrant problems—even if each single pass seems “non‑intrusive.”
Labeling in the record: Mark exhibits as Enhanced or Synthetic/Illustrative; ensure the jury sees originals and understands what the model added or inferred.
Use limits: On‑scene safety uses (e.g., checking for occupants in a fire) do not license evidentiary AI searches; once the emergency ends, switch to a warrant workflow.
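The “immutable logs of every run” and the Enhanced/Synthetic labeling above can be approximated with a hash chain: each entry folds in the previous entry’s digest, so any later edit to any entry breaks verification. A minimal sketch, assuming a hypothetical `RunLog` class; the label vocabulary simply mirrors the exhibit categories described earlier:

```python
import hashlib
import json

class RunLog:
    """Append-only run log: each entry commits to the previous
    entry's digest, so after-the-fact edits are detectable."""

    GENESIS = "0" * 64
    LABELS = {"Original", "Enhanced", "Synthetic/Illustrative"}

    def __init__(self):
        self.entries = []
        self._prev = self.GENESIS

    def append(self, exhibit_id: str, label: str, params: dict) -> dict:
        if label not in self.LABELS:
            raise ValueError(f"unknown exhibit label: {label!r}")
        body = {"exhibit_id": exhibit_id, "label": label,
                "params": params, "prev": self._prev}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        entry = {**body, "digest": digest}
        self.entries.append(entry)
        self._prev = digest
        return entry

    def verify(self) -> bool:
        # Recompute every digest; any tampering breaks the chain
        prev = self.GENESIS
        for e in self.entries:
            body = {k: e[k] for k in ("exhibit_id", "label", "params", "prev")}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or recomputed != e["digest"]:
                return False
            prev = e["digest"]
        return True
```

A real deployment would anchor the chain externally (e.g., periodic digests filed with the court) so the log keeper cannot silently rebuild the whole chain; this sketch only shows the detectability property.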
AI‑assisted sensing does not create a shortcut around the home. If a model helps interpret lawfully collected signals within a properly scoped warrant, courts may admit the results with strict reliability and authenticity guardrails. If AI is used to extract interior facts without a warrant, Kyllo makes the path clear: that is a search, and the remedy is suppression.