AI and Scale: A Quantitative Task-Based Theory of Automation
Coauthors: Wensu Li, Christina Qiu, and Neil Thompson
First Draft: March 2025 / Current Draft: March 2026
AI automation typically requires task-level fixed costs, such as model training or fine-tuning. We develop a quantitative task-based framework in which automation depends jointly on relative marginal costs and production scale, because scale helps amortize fixed adoption costs. Applying the model to computer vision automation, we discipline the fixed (training) and marginal (inference) costs using AI scaling laws that relate computational inputs to AI performance. We estimate the scaling law for fine-tuning vision AI models and rely on LLM-generated measures of task complexity and error tolerance to infer the computational costs of automation for all vision-related tasks in the economy. Calibrated to U.S. firm-level adoption of computer vision AI in 2023 and combined with current estimates of the trend in falling computing costs, the model projects rapid diffusion, reaching 23% of firms (60% employment-weighted) by 2035. Scale advantage accounts for roughly three-quarters of the variation in AI comparative advantage across tasks. As computing costs fall, real output rises by about 7% by 2075, real wages increase throughout, and the labor share follows a U-shape, reflecting a decline in the aggregate elasticity of substitution between labor and computer vision AI as automation deepens.
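The amortization mechanism in the abstract can be sketched as a simple adoption condition (the notation here is illustrative, not necessarily the paper's): a producer operating a task at scale y adopts AI when the fixed cost, spread over output, plus the marginal inference cost falls below the marginal labor cost.

```latex
% Illustrative adoption condition (notation assumed, not from the paper):
% F    : task-level fixed cost of automation (training / fine-tuning)
% c_AI : marginal (inference) cost of AI per unit of task output
% c_L  : marginal labor cost per unit of task output
% y    : production scale at the task
\[
  \frac{F}{y} + c_{\mathrm{AI}} \;\le\; c_{L}
\]
% Larger y amortizes F over more units, so high-scale producers
% automate first -- the source of "scale advantage" in adoption.
```

Under this reading, falling computing costs lower both F and c_AI over time, progressively relaxing the condition and driving the diffusion path the abstract describes.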