“Reason explains the world once it is visible. Feeling often finds it while it is still in the dark.”
— Aditya Mohan, Founder, CEO & Philosopher-Scientist, Robometrics® Machines
Human beings do not encounter reality first as philosophers. We encounter it as living creatures under pressure. A sound in the dark, a face across the room, a silence that feels longer than it should, a landscape that invites or warns, a person who appears safe before we can prove it, an idea that feels alive before we can defend it—these are not ornamental features of thought. They are among its oldest doors. This is why feeling should not be dismissed as the softer or weaker half of intelligence. Feeling is often the fast internal registration of significance before formal explanation is ready. Reasoning may later name the pattern, test it, refine it, or reject it; but very often feeling is what first tells us that there is a pattern worth examining at all. If the question is how human beings come to understand the world, then the answer is not through logic alone and not through instinct alone, but through a dynamic exchange between the two.
Our feelings may seem inconvenient precisely because they interrupt the smooth vanity of conscious thought. The reasoning mind likes to imagine that it proceeds in neat lines from premise to conclusion. Feeling ruins that performance. It introduces hesitation where confidence was expected. It inserts attraction where there is not yet a theory, caution where there is not yet proof, and resistance where the visible facts still seem incomplete. This can slow progress in the short run because it forces us to ask more difficult questions. Why did I distrust this before I could explain why? Why did this new direction seem right before the data was mature? Why does the body register danger, mismatch, or promise ahead of the verbal mind? Yet this slowing is often not a defect. It is the cost of consulting a deeper layer of cognition. Much of what the brain does is not available in sentence form. It is integrating memory, bodily state, timing, pattern completion, anomaly detection, social inference, and environmental cues below the threshold of speech. What arrives in consciousness as a hunch may be the compressed output of a very large amount of hidden processing. That is why feeling can be understood not merely as emotion in the narrow dramatic sense, but as part of the operating system of the brain. At a deeper level, feeling is tied to instinct, value, salience, and subconscious prediction. It marks what matters before explicit theory arrives. Modern neuroscience and cognitive science increasingly suggest that perception itself is not passive reception but active prediction: the brain is constantly generating expectations, comparing them against incoming signals, and updating its model of the world. Feeling may be thought of as part of the experiential surface of that predictive labor. A gut feeling is not automatically magical, and it is certainly not always correct. But neither is it random. 
It may be the body’s rapid display of an inference that conscious reasoning has not yet unpacked. In survival terms, this mattered long before humanity had laboratories, proofs, or instruments. Feeling helped organisms decide quickly under uncertainty. It was not added after intelligence. It was one of intelligence’s earliest practical forms.
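The predictive picture sketched above — expectations generated, compared against incoming signals, and corrected by the mismatch — can be caricatured in a few lines of code. This is a deliberately toy sketch, not a neuroscience claim: a single running estimate nudged by prediction error, with the learning rate and signals chosen arbitrarily for illustration.

```python
# Toy sketch of a predictive-update loop: an internal estimate is
# corrected by the error between what was predicted and what arrived.
def update(belief, signal, rate=0.2):
    error = signal - belief        # prediction error: the "surprise"
    return belief + rate * error   # nudge the model toward the signal

belief = 0.0                       # initial expectation
for signal in [1.0, 1.0, 1.0, 1.0]:
    belief = update(belief, signal)

print(belief)  # the estimate climbs toward the repeated signal
```

The point of the caricature is only structural: the system never sees "truth" directly; it sees errors against its own expectations, and the size of the error is itself a signal of significance — a rough analogue of a hunch arriving before the explanation does.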
History shows the same pattern at civilizational scale. Consider the famous story of how human beings came to understand the shape of the Earth. The childish version says that people once believed the Earth was flat and then enlightened reason finally announced that it was round. The real story is subtler and far more revealing. At ordinary human scale, the Earth feels flat. The ground beneath our feet seems level; the horizon looks like a boundary; nothing in daily motion immediately reveals planetary curvature. That first impression was not foolish. It was the natural report of limited local perception. But disciplined observation began to press against it. By the 4th century BCE, Aristotle was already arguing from evidence rather than myth. In On the Heavens, he writes, “The evidence of the senses further corroborates this. How else would eclipses of the moon show segments shaped as we see them?” He was pointing to something decisive: during a lunar eclipse, the Earth’s shadow appears curved. He also noted that the visible stars shift as one travels north or south, which would be expected on a spherical Earth. This is the important intellectual move. Human beings did not simply replace one opinion with another. They moved from immediate feeling about the local world to a larger model forced upon them by repeated observation.
Then the tools improved, and the model deepened. Around 240 BCE, Eratosthenes used differing shadow angles at Syene and Alexandria to estimate the Earth’s circumference. That moment matters because it marks the transition from qualitative inference to measured geometry. It is one thing to suspect the world is globe-like; it is another to calculate its size. Centuries later, Christopher Columbus did not sail west in 1492 to prove that the Earth was round. Educated Europeans had already inherited a long tradition of spherical Earth thinking. The deeper issue was distance, geography, and whether Asia could be reached westward by sea on a practical scale. Columbus himself, writing in his narrative of the third voyage in 1498, said: “I have always read that the world comprising the land and the water was spherical.” Even more interestingly, he then departed from that inherited model and proposed that the world was not perfectly round in the way he had been taught, but more pear-shaped, based on what he thought he observed. He was wrong in that revision, but that error is illuminating. It shows how discovery works in real time. New evidence does not always produce correct conclusions immediately. It often produces transitional models—part insight, part mistake—until better tools, measurements, and wider data correct them. Magellan’s voyage belongs to the same long drama. The famous line often attributed to him about the Church and a flat Earth is almost certainly not reliable history, and it is better to resist the romance of a false quotation when the real story is already powerful enough. A better-attested line, preserved in an early account of the voyage, presents Magellan as saying he would continue until he found “either the end of the land or some strait.” That is the more instructive Magellan: not a mascot for modern myth-making, but a navigator committed to testing the world by endurance, seamanship, and stubborn contact with reality.
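Eratosthenes’ measurement reduces to a single proportion: if the Sun is directly overhead at Syene while casting a shadow at Alexandria, the shadow angle equals the arc of the Earth between the two cities, so the full circumference is that arc scaled up to 360 degrees. The sketch below uses the commonly cited (and historically debated) ancient figures — an angle of about 7.2 degrees and a distance of about 5,000 stadia — purely as illustration.

```python
# Eratosthenes' proportion: the shadow angle at Alexandria equals the
# arc of the Earth's surface between Alexandria and Syene, so
#   circumference / distance = 360 degrees / shadow angle.
shadow_angle_deg = 7.2   # ~1/50 of a full circle (commonly cited value)
distance_stadia = 5000   # Syene-to-Alexandria distance (commonly cited)

circumference_stadia = (360.0 / shadow_angle_deg) * distance_stadia
print(circumference_stadia)  # → 250000.0
```

Because the length of the ancient stadion is itself uncertain, converting that 250,000-stadia figure into modern units is contested; what is not contested is the soundness of the geometry, which is why the moment marks the shift from suspicion to measurement.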
His expedition left Spain in 1519; Magellan himself was killed in the Philippines in 1521; and the voyage was completed by Juan Sebastián Elcano in 1522. What did that expedition accomplish? Not the first idea of a spherical Earth, which long preceded it, but a brutal confirmation at navigational scale that the planet was connected in a way theory had long suggested and that practice could now traverse. Later, with modern geodesy, satellites, orbital mechanics, and space photography—including direct global imagery from the Apollo era—the Earth would be described with even greater precision: not as a perfect sphere, but as an oblate spheroid, and more precisely still in relation to the geoid. In other words, our understanding of the world did not leap from flat to round in one clean stroke. It matured by stages as perception, reasoning, mathematics, instruments, and travel pushed against the limits of older models.
This is exactly why reasoning alone cannot carry the whole burden of understanding. Logic is only as strong as the premises and measurements available to it. In many domains of life, those premises are incomplete and those measurements do not yet exist. We often face problems for which present-day science is still immature, present-day sensors are too crude, or present-day language is too coarse. In such moments, feeling helps us move while knowledge is under construction. It is a provisional guide, not a final verdict. Sometimes it is brilliantly right and saves years of delay. A scientist senses that an anomaly matters before the full method is built. A founder senses a shift in the world before the market report catches up. A clinician senses that a patient is deteriorating before the numbers have fully turned. At other times, feeling is wrong—biased, fearful, miscalibrated, seduced by pattern where there is none. But that too is part of the human method. We act, we test, we revise. Civilization itself can be read as a long argument between intuition and correction.
The highest form of judgment, then, is neither raw instinct nor sterile rationalism, but a disciplined alliance between the two. Feeling tells us where to look. Reason tells us what survives scrutiny. Feeling is often fast because it compresses experience; reason is slow because it unpacks, compares, and checks. The wise mind does not worship either faculty in isolation. It lets intuition propose, then forces explanation to do its work. It lets the body register mismatch, then asks whether the mismatch is real. It lets a hunch open the door, then subjects the room beyond it to light. Some of the most important human advances have begun in exactly this way: with a suspicion, a discomfort, a fascination, a moral unease, or a strangely persistent attraction that did not yet have language strong enough to defend itself. Feeling gets us to the frontier. Reason builds the map once we arrive.
This becomes even more important when we think about the future of AI, AGI, and embodied intelligence. Much contemporary AI is astonishingly capable at abstraction: generating language, solving formal problems, identifying patterns in vast datasets, writing code, and producing plausible chains of explanation. But the world is not fundamentally made of text. It is made of friction, latency, mass, consequence, bodily vulnerability, social ambiguity, hidden context, and urgent value judgments under incomplete information. A machine can perform brilliantly in symbolic space and still fail badly in lived reality because it does not know what matters now in the way an organism does. In biological creatures, feeling helps solve that problem. It tags reality with urgency, aversion, care, risk, attraction, relevance, and moral weight. It helps a creature prioritize before explicit reasoning has finished. An embodied AGI worthy of the name may need something structurally analogous: not sentimentality, but an integrated architecture of salience and concern woven into perception, memory, prediction, and action. That future will likely demand more than larger models and more parameters. It will require systems that can bind world models to bodies, consequences, and local context. They will need richer sensor fusion, tactile understanding, continuous memory across time, on-device inference where latency matters, and action policies that are graded rather than brittle. A competent embodied machine will need to distinguish between a glass that can be grasped and one that is about to slip; between a human command that is linguistically clear and one that is emotionally hesitant; between a room that is physically unchanged and a room whose social meaning has shifted. It will need to know not only what is there, but what is fragile, what is risky, what is odd, what is out of place, what should wait, and what must be done immediately. 
In short, it will need more than reasoning in the narrow computational sense. It will need a way of marking significance in real time. This does not mean we must literally copy human emotions into machines in a crude theatrical form. The lesson is deeper than imitation. Human feeling is valuable not because it is always beautiful or always correct, but because it is an evolved system for prioritizing action in a world too complex to be fully reasoned through in advance. If embodied AI is ever to become genuinely world-competent—especially in medicine, caregiving, aviation, field operations, and other environments where uncertainty is normal—then its intelligence will need a counterpart to that prioritizing layer. Not a decorative chatbot personality, but something closer to machine relevance, machine caution, machine valuation, machine concern. Only then will an artificial system begin to move from mere calculation toward situated understanding.
So the old opposition between feeling and reason is too primitive for the future we are entering. Feeling is not the enemy of understanding. It is often the first rough sketch of understanding. Reason is not the enemy of instinct. It is the method by which instinct is corrected, enlarged, and made transmissible. Human beings have always known the world by this duet: sensing first, explaining later, correcting endlessly, and extending perception through new tools. The history of the Earth’s shape, from Aristotle’s eclipse argument to Eratosthenes’ measurement, from Columbus’s inherited spherical world and mistaken revision to Magellan’s voyage and later space-based precision, is one long proof of that process. We do not first possess complete knowledge and then move confidently through reality. More often, we feel our way toward truth while reason builds the instruments to catch up.