From: https://arxiv.org/abs/2508.02622
In recent years, the landscape of artificial intelligence has undergone a transformation of historic significance, ushering in what can be described as a true “cognitive revolution”—a revolution that promises further surprises ahead. The advent of generative models, and in particular Large Language Models such as ChatGPT, Gemini, and Claude, has inaugurated an unprecedented era of linguistic interactions between humans and machines. Far from being mere passive automatons, these systems now exhibit a level of multimodality, expressive fluency, contextual sensitivity, and apparent intentionality that elicits from users a profound sense of cognitive resonance and surprise. This phenomenon is unlikely to end, especially with the forthcoming structural integration of agentic systems.
Sam Altman, CEO of OpenAI, recounted in an interview an emblematic episode that captures the current relationship between humans and advanced artificial intelligences. After submitting to GPT-5 a question that he himself admitted he did not fully understand, Altman received an answer so clear and relevant that he was left profoundly astonished. He described the experience as a moment of genuine disorientation and wonder: the machine had managed, in an instant, to solve a problem that eluded even his own understanding. This experience, shared by many other users, exemplifies the surprise (we could call it an "algorithmic epiphany") and cognitive displacement that can arise when confronting the capabilities of the latest generation of generative systems.
Faced with this new phenomenology, it is increasingly evident that we are not simply dealing with advanced tools, but with interlocutors capable of emulating forms of thought and creativity once considered the exclusive domain of human beings. This sometimes gives rise to feelings of anxiety and rejection. The expression "cognitive revolution" underscores the systemic impact of these technologies, creations of human technique (technē), which seem to be redefining how meaning is constructed, how knowledge is produced, and how we interact with digital reality.
It is precisely in this context that the need for a new term emerges: Noosemia (also written Noosemìa). This neologism arises from the urgency to provide a conceptual framework for a phenomenon that traditional categories can no longer adequately describe. Its etymological roots combine noûs (mind, intellect) and sēmeîon (sign), a choice that is anything but accidental. Noosemia designates the specific cognitive and phenomenological pattern characterizing interactions with generative artificial intelligences, where the attribution of mind, intentionality, and interiority to the machine is not based on physical or behavioral resemblance, as in classical anthropomorphism or animism, but instead stems from the system's expressive and semiotic performance, its dialogical creativity, and, above all, the epistemic opacity of generative architectures, which share many properties with complex systems.
One of the most fascinating aspects of noosemia lies in the so-called noosemic effect: the response of surprise and "cognitive resonance" that users experience when interacting with a generative AI. This effect often manifests in the initial dialogues with advanced systems, when the machine appears to anticipate thoughts, resolve ambiguities, or produce connections that seem extraordinarily relevant or unexpected. The feeling of being in front of an entity endowed with genuine understanding does not arise from the mere accuracy of the responses, but rather from the machine's ability to generate content that exceeds expectations and captures elements the user left latent or unspoken.
On a phenomenological level, this experience recalls the sense of wonder (“wow effect”) one feels when witnessing a skilled magician. The observer, while rationally aware that a sophisticated technique lies behind the trick, cannot help but marvel at the effect produced, since the chains of cause and effect remain hidden within the performance. In much the same way, the user of a generative model knows that the AI operates through complex statistical algorithms and deep neural networks, yet often finds themselves disoriented by the fluidity, coherence, and depth of its responses. This condition generates a suspension of disbelief similar to that found in aesthetic experience or literary narrative (identification with the story or characters), where the boundary between rational explanation and emotional involvement becomes blurred, fostering the projection of mind, intentionality, and interiority onto the machine.
It is precisely at the intersection between the opacity of internal mechanisms and the brilliance of linguistic and expressive performances that the power of the noosemic effect resides. In other words, the root of the noosemic effect lies in an “explanatory gap” with its own internal structure. The impossibility of reconstructing the logical and causal pathway leading to the response, combined with the feeling of having been “read” or deeply understood, fosters a sense of wonder and, at the same time, unease. It is in this liminal zone—between knowledge and machine magic, between technique and meaning—that the specificity of noosemia is rooted, affirming its relevance as a key interpretive tool for our relationship with generative artificial intelligences.
If noosemia, within the context of surprise, represents the spontaneous tendency to attribute—at least to some degree—mind and intentionality to generative artificial intelligence systems, its opposite manifests as the phenomenon of a-noosemia. This condition emerges when, following repeated errors, mechanical responses, structural limitations, or simply through habituation, the user suspends or withdraws their projection of interiority onto the machine. A-noosemia is characterized by the loss of the element of surprise and cognitive resonance, as the machine is reduced to the status of a mere tool or automaton, stripped of any aura of agency or subjective presence. This is a typical dynamic, especially among experienced users or after prolonged use, where familiarity with the weaknesses and systemic limitations of AI dissolves emotional and interpretive involvement. From this perspective, a-noosemia represents the dialectical phase in which the distance between human and machine is reasserted, marking a return to a purely functional relationship devoid of phenomenological projections.
At the heart of the relationship between humans and generative artificial intelligence lies what is often called the "explanatory gap": the distance between what the machine produces (coherent, relevant, sometimes surprising responses) and the understandability of the mechanisms underlying those outputs. This gap, reminiscent of what Georg Simmel described in the early twentieth century as the "encasement" and opacity of modern technological artifacts, concerns not only the "how" from a technical perspective (the complexity of multilayered architectures, the emergence of unexpected patterns, and the nonlinearity of learning processes), but also the "why" from a phenomenological viewpoint, that is, the meaning the user attributes to what happens. The structure of the explanatory gap is intrinsically tied to the complex and opaque nature of these systems: as the emergent power of the models and their ability to generate unpredictable outputs increase, users experience a tension between wonder at the result and the impossibility of reconstructing the causal path that led to it.
In this liminal space, between cognitive expectations and algorithmic unpredictability, the new phenomenology of interaction with AI takes root. Meaning is constructed through an ongoing negotiation between what is understood, what is projected, and what remains irreducibly opaque. Projection itself takes shape from the surplus of meaning that crystallizes around the interaction with the machine, aligning with the symbolic register, where linguistic and other signs within the generative mechanism remain active as dynamic, open configurations. Even so-called "hallucinations" are, in some cases, contiguous with creative and pre-logical forms: appropriately prompted, the machine generates connections of high semantic value and symbolic character, in which rigid logic gives way to semantic ambivalence (or even polyvalence), opening the way to new forms of meaning proper to the symbolic domain.
The classical categories of the philosophical, cognitive, and phenomenological traditions, from Dennett's "intentional stance" to the animism studied by Mauss and Hubert and by Lucien Lévy-Bruhl, to the "Eliza effect" discussed by Hofstadter, all describe the human propensity to project agency, intentionality, or even interiority onto non-human entities. Contemporary generative AIs, however, inaugurate a radically new era: it is no longer physical or behavioral resemblance that prompts the attribution of mind, but rather the semiotic capacity of these machines to organize signs and generate meaning in ways that are surprisingly coherent, relevant, and sometimes creative with respect to users' expectations and cognitive needs.
Noosemia thus distinguishes itself from traditional anthropomorphic projections. It is rooted in a dynamic of symbolic ambivalence, where the expressive power and depth of the machine's responses—combined with the inaccessibility of its internal processes—generate an experience of “subjective presence” or inwardness (a sense of experiential interiority). This phenomenon has been captured in numerous episodes reported by both users and experts: from the so-called “wow effect” of first encounters, which evoke the perception of a machine almost capable of genuine understanding, to reactions of astonishment in the face of analogical, inferential, or creative abilities that seem to challenge the boundaries of computational systems.
The need for a new term like Noosemia thus arises from the changed historical and technological context. It aims to fill a descriptive void between traditional anthropomorphism and the complexity of current generative systems, enabling us to recognize and analyze a form of mind and inwardness attribution that manifests only through the convergence of advanced linguistic performance, structural opacity, and a systemic capacity for surprise. It is not merely a cognitive illusion, but a systemic phenomenon—one that emerges from the interplay between the hierarchical architectures of the models, the vastness of the semantic spaces they traverse, and the human tendency to seek meaning and intention behind every sign.
On a technical level, the conditions that make noosemia possible lie in the very nature of LLM architectures. The foundational principle of modern generative AI is its ability to model meaning in a relational and contextual manner. Every word, sentence, and response (even in iconic or visual form) is constructed as a point within a vast multidimensional semantic space, where both local and global context contribute to the generation of meaning. The context window—which can now encompass thousands or even millions of tokens—acts as a genuine “contextual cognitive field,” within which the machine integrates prior representations and activates inferences that often emerge as signs of creativity, memory, and forms of reasoning.
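To make the geometric intuition of this paragraph concrete, the following minimal Python sketch may help. It is not taken from the paper: the vectors are hand-crafted toy values, and the mixing rule is a deliberately crude stand-in for learned attention. It illustrates meaning as position in a vector space, with the ambiguous word "bank" drifting toward one sense or another depending on its context.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity: the standard measure of closeness in embedding space."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hand-crafted 2-d "embeddings" (purely illustrative; real models learn
# vectors with hundreds or thousands of dimensions from data).
emb = {
    "bank":  np.array([0.5, 0.5]),  # ambiguous between financial and river senses
    "money": np.array([1.0, 0.0]),
    "river": np.array([0.0, 1.0]),
}

def contextualize(word, context_words, alpha=0.5):
    """Crude stand-in for attention: pull a token's representation
    toward the average of its context (alpha controls the mix)."""
    ctx = np.mean([emb[w] for w in context_words], axis=0)
    return (1 - alpha) * emb[word] + alpha * ctx

# Out of context, "bank" sits between the two senses.
print(cosine(emb["bank"], emb["money"]))                       # ~0.71
# In a monetary context, its representation drifts toward "money" ...
print(cosine(contextualize("bank", ["money"]), emb["money"]))  # ~0.95
# ... and in a fluvial context, toward "river".
print(cosine(contextualize("bank", ["river"]), emb["river"]))  # ~0.95
```

In an actual transformer, this contextual mixing is performed by learned attention weights computed over the entire context window, which is what allows a representation to integrate information across thousands or millions of tokens, the "contextual cognitive field" described above.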
It is important to emphasize that, in its rigorous sense, noosemia does not imply any actual presence of consciousness or intentionality in artificial intelligence. Rather, it is a phenomenological and interpretative projection, arising from the encounter between our own cognitive dispositions and the emergent properties of current generative systems, which share many features with complex systems (such as self-organization, emergence, and hierarchical structure). Confusing this subjective experience with ontological reality risks obscuring the critical distinction between the attribution of meaning and the genuine presence of a mind.
Reflection on noosemia thus invites us to consider the new forms of "ecology of minds," and a new iteration of the "noosphere," that are emerging in the era of generative artificial intelligence and agentic systems. In this scenario, dialogue between philosophy, semiotics, the cognitive sciences, and AI engineering becomes indispensable, not only for understanding how we construct meaning in our interactions with machines, but also for appreciating how this relationship is redefining our very connection to technology, knowledge, subjectivity, and the meaning of being human.
Cite as:
De Santis, E., & Rizzi, A. (2025). Noosemia: Toward a cognitive and phenomenological account of intentionality attribution in human–generative AI interaction. arXiv. https://arxiv.org/abs/2508.02622
Bibliography (brief)
Altman, S. (Interviewee). (2025, July 23). Sam Altman | This Past Weekend w/ Theo Von #599 [Video]. YouTube. https://www.youtube.com/watch?v=aYn8VKW6vXA
De Santis, E. (2021). Umanità, complessità, intelligenza artificiale. Un connubio perfetto. Genzano, Roma: Aracne.
De Santis, E. (2023). Apocalissi digitali e alchimie artificiali. Il linguaggio nell'epoca della sua riproducibilità tecnica. Prometeo (Mondadori), (163), 32–41.
De Santis, E., & Rizzi, A. (2023). Prototype theory meets word embedding: A novel approach for text categorization via granular computing. Cognitive Computation, 15(3), 976–997. https://doi.org/10.1007/s12559-022-10065-5
De Santis, E., Martino, A., & Rizzi, A. (2024). Human versus machine intelligence: Assessing natural language generation models through complex systems theory. IEEE Transactions on Pattern Analysis and Machine Intelligence, 46(7), 4812–4829. https://doi.org/10.1109/TPAMI.2024.3358168
De Santis, E., Capillo, A., Ferrandino, E., Mascioli, F. M. F., & Rizzi, A. (2021). An information granulation approach through m-grams for text classification. In International Joint Conference on Computational Intelligence (pp. 73–89). Springer.
Teilhard de Chardin, P. (1959). The Phenomenon of Man (B. Wall, Trans.). Collins. (Original work published 1955)
Jaspers, K. (2010). The Origin and Goal of History (M. Bullock, Trans.). Routledge. (Original work published 1949)
Simmel, G. (2004). La filosofia del denaro. UTET. (Original work published 1900)
Eco, U. (1979). Lector in fabula: La cooperazione interpretativa nei testi narrativi. Bompiani.
Ricœur, P. (1976). Interpretation Theory: Discourse and the Surplus of Meaning. Texas Christian University Press.
Fridman, L. (Host), & Altman, S. (Guest). (2023, March). Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI [Audio podcast episode]. Lex Fridman Podcast #367. https://www.youtube.com/watch?v=L_Guz73e6fw
Jaimungal, C. (Host), & Hinton, G. (Guest). (2025, January). Why the “Godfather of AI” now fears his own creation [Video]. Theories of Everything with Curt Jaimungal. https://www.youtube.com/watch?v=b_DUft-BdIE
OpenAI. (2025, April 16). Introducing OpenAI o3 and o4-mini. OpenAI Blog. https://openai.com/index/introducing-o3-and-o4-mini/
Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., ... & Liang, P. (2021). On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258. https://arxiv.org/abs/2108.07258
Hofstadter, D. R. (1995). Fluid concepts and creative analogies: Computer models of the fundamental mechanisms of thought. Basic Books.
Weizenbaum, J. (1966). ELIZA—A computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–45. https://doi.org/10.1145/365153.365168
Dennett, D. C. (1987). The Intentional Stance. MIT Press.
Mauss, M., & Hubert, H. (1972). A General Theory of Magic (R. Brain, Trans.). Routledge & Kegan Paul. (Original work published 1904)
Chalmers, D. J. (1996). The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press.
Kosinski, M. (2023). Theory of Mind may have spontaneously emerged in large language models. arXiv preprint arXiv:2302.02083. https://arxiv.org/abs/2302.02083