“Don’t write laws for the machine. Write laws for what happens to people when the machine is used.
The tool will evolve; the standard of care must not.”
— Aditya Mohan, Founder, CEO & Philosopher-Scientist, Robometrics® Machines
There’s a quiet illusion we keep falling for in every technological era: the belief that safety lives inside the mechanism. We point at the new thing—the steam engine, the telegraph, the radio, the reactor core, the neural network—and we try to legislate the gears. But harms rarely come from gears. They come from incentives, intent, negligence, and the predictable ways people route power through whatever instruments are available. The law, at its best, has always known this. Tort doctrine asks whether a duty was breached and whether that breach caused foreseeable injury. Fraud doctrine asks whether a person misrepresented facts and induced reliance. Privacy doctrine asks whether the state overreached, regardless of whether the intrusion came by lantern, wiretap, thermal sensor, or cell-site trail. When Oliver Wendell Holmes wrote that the life of the law is experience, he was pointing at an enduring method: judge the human consequence, not the novelty of the tool.
Now place that method into an AI-saturated world. In a decade, “AI” won’t feel like a separate category any more than “electricity-powered.” It will be woven into cameras, payroll systems, airplanes, medical intake forms, creative software, customer service, and the household objects that already listen and learn. If regulation tries to enumerate every model type, training method, architecture, and product surface area, it will collapse under its own taxonomic ambition. That is the Law of the Horse problem in modern clothing: Frank Easterbrook’s caution, later debated by Lawrence Lessig, that carving the law into tiny technology-specific kingdoms risks shallow rules that miss the general principles that actually govern real disputes. The smarter move is to keep the law’s center of gravity on the outcome: discrimination, deception, unsafe design, defamation, privacy invasion, coercion, and nonconsensual exploitation. Let the tool change; keep the test stable.
One Clinic for Every Machine, One Law for Every Harm
On a wet evening corner, the Uptime Clinic glows like a small oath against chaos: one weathered industrial robot, calm as an old judge, guides a kid’s hands as they tighten a bolt on a trembling robotic dog. A humanoid helper waits with a scuffed elbow, a house bot’s cracked sensor dome blinks patiently, a delivery rover lists slightly from a bent wheel strut, a robot cat flexes an exposed actuator paw, and a horse-shaped machine holds its stiff rear joint like an injured runner. Different bodies, same vulnerability, same promise of care.

That is the point the “law of the horse” tries to rescue from every new wave of novelty. A city could teach an entire course on horses—sales of horses, injuries by horses, licensing and racing—and still miss the unifying principles that actually govern responsibility. The wiser move is to learn the deeper rules first and apply them everywhere, the way this clinic does without asking what kind of robot you are, only whether you can be made safe again, whether you can be trusted again, whether consent and truth are restored before you reenter the street. The traffic blurs past like the hype cycle and the billboards flicker like new jargon, but under the awning the test stays stable: not the tool, not the shape, not the label—just the outcome, the duty of care, and the quiet insistence that whatever intelligence we build, we remain accountable for what it does to living lives.
As Judge Easterbrook argued in his 1996 remarks at the University of Chicago’s “Law of Cyberspace” gathering—later published the same year as “Cyberspace and the Law of the Horse” in the University of Chicago Legal Forum—the point was never that horses are trivial. The point was that a tool-centered silo teaches the wrong lesson. His key sentence is plain and sharp: “Any effort to collect these strands into a course on ‘The Law of the Horse’ is doomed to be shallow and to miss unifying principles.” In other words, you don’t learn torts by memorizing every horse-kick case; you learn torts, and then you can understand the horse-kick cases.
People later paraphrased his stance as “there is no law of the horse,” sometimes loosely rendered as “law of horses,” a shorthand that shows up prominently in Lawrence Lessig’s 1999 response essay, which recalled Easterbrook’s provocation to a room full of cyberlaw enthusiasts. Lessig disagreed with Easterbrook’s conclusion, but even in disagreement he captured the frame: the medium is not the doctrine. If you want durable governance, start with the deep machinery—property, contract, tort, procedure, constitutional limits—and only then ask how the new environment stresses those doctrines.

Easterbrook’s deeper claim was methodological humility. He warned that what lawyers “know” about fast-mutating technologies ages badly, and that tailoring bespoke rules to a moving target can weaken the analysis by isolating it from the broader body of law. His prescription was almost stubbornly classical: develop sound general law first, then apply it to the new setting. When he uses intellectual property as his illustration, the message is clear: you can’t responsibly optimize the rules for networks until you’ve clarified what you actually believe the underlying rights and remedies should be.

That historical caution matters because AI is not one thing. It is a moving bundle of techniques deployed across privacy, intellectual property, liability, discrimination, consumer protection, and criminal misuse. A specialized Law of AI risks fragmentation: splintered concepts and inconsistent definitions that make it harder to reason across cases where the same human harm appears in different costumes. It also risks losing unifying principles. The best legal ideas—duty, causation, intent, reasonable reliance, consent, foreseeability, proportionality—are portable. They are what let courts compare a new fact pattern to an older one without pretending the new tool changes the moral structure of the dispute.
Finally, a siloed Law of AI can reduce adaptability. Common law evolves by deciding concrete disputes, then refining standards as edge cases appear. That is how “reasonable care” learns, how product safety expectations rise, and how responsibility attaches to those best positioned to prevent harm. If we freeze AI into specialized statutes that chase specific architectures or model families, we lock ourselves into yesterday’s vocabulary and invite loopholes tomorrow. The better approach is to keep AI inside the ordinary pathways of accountability—then strengthen the procedural muscle around it: clearer disclosure when synthetic media is used, enforceable consent boundaries for identity and likeness, audit trails for high-impact decisions, and meaningful remedies when systems predictably injure people.

The difference becomes vivid when you consider dual-use reality. Nuclear fission can light cities or flatten them; the ethical variable is not the physics but the deployment. AI can generate synthetic media that restores a voice lost to illness or lets a person build a faithful digital twin for consent-based continuity of self. The same capability can also be used for impersonation, extortion, and nonconsensual sexual imagery. A tool-based ban cannot see that moral split; it only sees “the algorithm.” Outcome-based rules see the breach. They ask: was there consent, was there deception, was there an unreasonable risk, was there inducement, was there foreseeable harm? This is why the most durable tech cases often turn on behavior and effect rather than the gadget itself—why the Sony Betamax doctrine cared about substantial non-infringing use, and why Grokster focused on inducement: not the existence of copying software, but the promotion and steering of infringement.
A principles-based approach is also a humility practice for smart minds. Isaac Asimov warned that knowledge can outpace wisdom; law exists, in part, to close that gap with accountability that survives the next wave of innovation. Louis Brandeis’s sunlight metaphor reminds us that transparency is not a feature request but a safety constraint: disclosure, auditability, and explainable responsibility reduce the space where harm hides. And George Bernard Shaw’s line about progress depending on the unreasonable person carries a useful warning for governance: we want creativity and daring, but we should demand that audacity be paired with responsibility for consequences. If AI is everywhere, the law cannot chase it everywhere. It can, however, insist—everywhere—that the people who build and deploy these systems are answerable for what their systems do to human lives.