My research studies how the costs of acquiring, organizing, and communicating knowledge shape the structure of firms, the distribution of wages, and economic growth. This agenda, which spans two decades, provides the theoretical and empirical foundations for understanding both the IT revolution and the current AI transformation.
Knowledge hierarchies and the organization of firms. I introduced the concept of "knowledge hierarchies" (JPE, 2000) to explain why firms are structured as they are. When expertise is costly to acquire and communication is imperfect, firms adopt management by exception: production workers handle routine problems, and specialists deal only with the hardest and most unusual cases. This division of cognitive labor allows expert knowledge to scale efficiently.
Building on this foundation, my collaborations with Esteban Rossi-Hansberg (2006; 2012; 2015), Pol Antras (2006), and Willie Fuchs and Luis Rayo (2015) embedded these hierarchies into general equilibrium models. We showed how the ability to organize knowledge determines critical macroeconomic outcomes: the way firms leverage expertise dictates not only their internal productivity but also patterns of wage inequality, the dynamics of international trade, and the structure of global supply chains. With Tom Hubbard, I tested these predictions empirically using US Census data on US law firms (2009, 2016, 2018).
Information and communication technology (ICT) changed the costs of knowledge transfer. Crucially, my research with Nick Bloom, Raffaella Sadun, and John Van Reenen (2014) empirically disaggregated the effects of these technologies. We found that, consistent with the theory, tools that reduce the cost of accessing information (such as databases or ERP systems) tend to empower lower-level workers, allowing them to solve more problems autonomously and leading to decentralized decision-making. Conversely, tools that reduce the cost of communication (such as intranets or email) facilitate the application of centralized expertise, allowing managers to direct more subordinates and often leading to greater centralization. The evolution of the modern knowledge economy has been shaped by the tension between these two forces. With Paul Heaton (Journal of Labor Economics, 2010), I studied IT adoption across US police departments from 1987 to 2003. We found that IT alone had negligible effects on crime-fighting effectiveness; it improved productivity only when complemented by specific management practices, such as those in New York's CompStat program. The finding that technology requires organizational redesign to deliver results has become a recurring theme across my work.
Artificial intelligence and the future of work. AI represents a qualitative shift: for the first time, machines can accumulate and deploy tacit knowledge, collapsing toward zero the cost of applying expertise at scale. This directly challenges the premise underlying knowledge hierarchies: that specialized knowledge is scarce because it is tied to human time. The theoretical tools developed over decades to study the organization of human knowledge are indispensable for understanding how AI will reshape firms, the future of expertise, careers, job bundles, and the structure of the economy.
My current research addresses the consequences along several dimensions. First, with Li and Wu (HKU), I have developed a theory of how AI reshapes job boundaries ("Weak Bundle, Strong Bundle: How AI Redraws Job Boundaries," 2026). The standard approach to AI and labor markets counts how many tasks within an occupation are "exposed" to automation and treats high exposure as high risk. We show this is incomplete: labor markets do not price tasks; they price bundles of tasks called jobs. What determines whether AI displaces or augments a worker is the cost of breaking the bundle apart. In "strong-bundle" occupations, where tasks are bound together by shared context, accountability, or complementarities, separating the AI-automatable component from the rest destroys value: the bundle holds, AI assists within the job, and the worker retains both tasks. In "weak-bundle" occupations, where tasks can be separated at low cost, AI performs the codifiable piece autonomously and the human role narrows. This framework provides a tool for identifying which occupations face genuine displacement risk and which will see AI-augmented productivity gains.
Second, with Rayo (Northwestern), I study how AI threatens the apprenticeship model that sustains human capital formation ("Training in the Age of AI," 2025). We show that a single statistic, the expertise leverage ratio, governs whether apprenticeships survive or collapse as AI automates the routine work that juniors traditionally perform in exchange for training. Third, in a forthcoming NBER chapter ("The Economics of Superabundant AI," 2025), I analyze how the interaction between AI autonomy and compute scarcity determines whether AI augments all workers or displaces some, and argue that displacement is avoidable if firms can create new addressable opportunities at sufficient pace. Fourth, with Antonio Cabrales and Toni Roldán, I have spent a year studying the rollout of generative AI inside BBVA, one of Europe's largest banks, documenting how bottom-up, trust-based adoption strategies outperform top-down corporate mandates. Separately, with Brzezinski (Oxford), I use large language models as a research tool to study how political narratives on climate policy shift in response to economic shocks ("Narrative Entanglement in Climate Policy," 2025), demonstrating the broader methodological value of LLMs for empirical social science.
I discuss each of these pieces in turn.
Weak Bundle, Strong Bundle: How AI Redraws Job Boundaries: With Jin Li and Yanhui Wu, in this March 2026 working paper we study the difference between tasks and jobs. Here is the abstract: We show that the effect of AI on an occupation depends not just on which tasks AI can perform but also on how costly it is to unbundle those tasks from the job. Much of the discussion of AI and labor markets starts from task exposure: if AI can perform more tasks in an occupation, that occupation should lose employment or earnings. This is incomplete because labor markets price jobs, not tasks. Jobs bundle tasks together, and the effect of AI depends on how costly it is to break the bundle. We build a two-task model in which AI can either assist one task inside a bundled job or supply that task autonomously while a human supplies the residual task. We show that, in weak-bundle occupations, AI automates some tasks and narrows the boundary of the job, leading to the standard task-substitution channel. In strong-bundle occupations, where tasks are not independently reallocable, AI improves performance inside the job but does not remove the human from the bundle. Thus, bundling provides a force that protects jobs and the labor share.
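The logic of the two-task model can be sketched as a stylized inequality; the notation below is my own shorthand for the mechanism described in the abstract, not the paper's formal setup:

```latex
% Stylized unbundling condition (illustrative notation only).
% Let a be the AI-exposed task and b the residual task.
% Bundled: one worker performs both, with AI assisting on task a.
% Unbundled: AI supplies a autonomously, a human supplies b,
% at an unbundling cost c (lost context, accountability, complementarities).
\[
\underbrace{Y^{AI}(a) + Y^{H}(b) - c}_{\text{unbundled output}}
\;\gtrless\;
\underbrace{Y^{H+AI}(a,b)}_{\text{bundled output}}
\]
% Weak bundle: c is small, the left side dominates, AI automates
% task a and the job narrows. Strong bundle: c is large, the bundle
% holds, and AI assists within the job.
```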
Training in the Age of AI: With Luis Rayo (Northwestern), this September 2025 working paper studies the impact of GenAI on the career ladder: "Training in the Age of AI: A Theory of Apprenticeship Viability." Here is the abstract: Apprenticeships let juniors pay for training by doing menial work. AI now performs an increasing share of that work, putting the bargain at risk. We introduce AI into a dynamic apprenticeship model with an automation threshold and possible complementarity for experts. A single statistic—the expertise leverage ratio, measuring the AI-augmented value of a graduate relative to AI’s standalone output—governs the impact of AI. Our central result is that apprenticeships are guaranteed viable, in the sense that they are at least as profitable as they were before the arrival of AI, when this ratio is above a critical threshold, specifically Euler’s number e; in this case, training has a fixed duration and the apprenticeship is not at risk. Below the threshold, training compresses as the master’s saleable knowledge shrinks; in this case, advances in AI threaten wholesale apprenticeship collapse.
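The paper's key statistic, as defined in the abstract, can be written compactly; the symbol R is my own shorthand:

```latex
% Expertise leverage ratio (R is illustrative notation).
\[
R \;=\; \frac{\text{AI-augmented value of a graduate}}{\text{AI's standalone output}}
\]
% Viability result stated in the abstract: apprenticeships remain
% at least as profitable as before AI whenever R exceeds Euler's number,
\[
R \;\ge\; e \;\approx\; 2.718,
\]
% in which case training has a fixed duration; below the threshold,
% training compresses and wholesale collapse becomes possible.
```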
Autonomy and Scarcity: This paper is forthcoming as a chapter in an NBER volume on the Economics of Transformative AI, edited by Ajay K. Agrawal, Anton Korinek, and Erik Brynjolfsson and published by the University of Chicago Press: "The Economics of Superabundant AI: Autonomy, Scarcity and the Future of Work". The paper analyzes how "superabundant" AI can simultaneously augment and displace workers. The outcome depends on what remains scarce. When compute is scarce, or the AI is non-autonomous, AI is a "co-pilot," and human time retains value. If compute is abundant and AI is autonomous, "opportunities" or "slots" become the bottleneck, displacing low-skill humans. If compute is abundant but AI is non-autonomous, human input is the bottleneck: all humans work, but wages compress. Hence the paper argues that displacement is avoidable: if firms can create new "addressable opportunities" at a cost lower than the value AI provides, they will do so. This keeps compute scarce and sustains human employment.
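The taxonomy of cases described above can be laid out as a two-by-two table; the layout is my own summary of the regimes the paper distinguishes:

```latex
% Regimes of superabundant AI (illustrative summary layout).
\begin{tabular}{l|l|l}
 & \textbf{AI non-autonomous} & \textbf{AI autonomous} \\ \hline
\textbf{Compute scarce} & Co-pilot; human time retains value & Co-pilot; human time retains value \\ \hline
\textbf{Compute abundant} & Human input is the bottleneck; all work, wages compress & ``Slots'' are the bottleneck; low-skill displacement \\
\end{tabular}
```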
AI implementation: Together with Antonio Cabrales and Toni Roldán, I spent almost a year studying, alongside a BBVA team, the rollout of OpenAI's ChatGPT inside BBVA. We recently submitted the resulting case study for publication in a business journal: "Trust as a Scaling Strategy: How Internal Entrepreneurs Drive Corporate AI Adoption", co-authored by Elena Alfaro, Antonio Cabrales, José Elías Durán Roa, Luis Garicano, Isabel Pérez del Caño, Toni Roldán Monés, and Guillermo Vieira de Santiago. We argue that most top-down corporate GenAI programs disappoint. Since value in GenAI is generated to a large extent from the bottom up, why not instead use your employees' technical and entrepreneurial talent to scale the rollout of GenAI in the organization? Can this be done safely and effectively? The job is to achieve this in a safe, visible, and scalable manner, and fast, and then to build the organization around it. We believe that BBVA's human-centered, bottom-up strategy demonstrates how to do that effectively at scale, in a heavily regulated industry.
Narratives: I also recently submitted for publication a paper with Adam Brzezinski that, while not on AI, relies heavily on large language models as a tool: "Narrative Entanglement in Climate Policy". Our starting point is that political narratives on climate policy have turned more skeptical despite evidence of climate urgency. We explain this shift with a theory of narrative entanglement: to appeal to voters, politicians intertwine economic and environmental narratives rather than treating them separately. Hence, shocks unrelated to climate change can impact environmental narratives. We test our theory in the context of Russia’s invasion of Ukraine, which affected the economic costs of the European Green Deal without changing its impact on emissions. We use large language models to identify climate narratives across all speeches in the 9th European Parliament (2019-2024). Exploiting only variation within each parliamentarian, we show that after the invasion, narratives become both more negative in their cost assessments of climate policies and more skeptical about their environmental impact.
I am currently writing a book with Jin Li and Yanhui Wu (HKU) on AI in organizations. I will keep you updated on this.
In our Substack, Silicon Continent (with Pieter Garicano), we have focused much of our attention on AI. Here are some of my writings:
R without G: Good news on AI could be bad news for the Euro.
An AI-driven growth acceleration could paradoxically create debt-sustainability problems.
The AI Becker Problem: Who will train the next generation?
The initial version of the training problem above.
The smart second mover, with Jesús Saa-Requejo.
A policy proposal for developing AI in Europe.
Can AI solve Europe’s problems?: Baumol's disease, regulatory resistance, and the O-ring problem.
Obstacles to AI driving large productivity gains.
The Compliance Doom Loop: Why the rules keep growing.
Not only about AI, but about how Europe has created a huge compliance economy; and wait till you see the AI agency.
How to think about the economic impact of AI: The scarce factor is humans’ cognitive ability.
A discussion of the knowledge-hierarchies view of AI.
Is GDPR undermining innovation in Europe?
The General Data Protection Regulation (GDPR) was supposed to be Europe's big move to protect consumer privacy and reassert its technological relevance.