For background on the concept, see our article on the Law of Accelerated Returns: https://www.robometricsagi.com/blog/ai-policy/the-law-of-accelerated-returns. The core idea is that breakthroughs compound, steepening capability curves and compressing adoption cycles. As context, GDP is typically measured by the expenditure approach—consumption (C) + investment (I) + government spending (G) + net exports (NX). Real GDP adjusts for price changes using chain‑weighted price indexes, while the value‑added and income approaches provide cross‑checks. Quality adjustments and hedonic methods exist in modern accounts, but most tools were designed for products that evolved yearly, not weekly.
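The expenditure identity and the deflator step can be made concrete in a few lines. This is a minimal sketch with invented figures, not real national-accounts data; the function names and the base-100 deflator convention are illustrative assumptions.

```python
# Expenditure-approach GDP identity: GDP = C + I + G + NX.
# All figures below are invented for illustration.
def gdp_expenditure(consumption: float, investment: float,
                    government: float, net_exports: float) -> float:
    """Nominal GDP via the expenditure approach."""
    return consumption + investment + government + net_exports

def real_gdp(nominal: float, deflator: float) -> float:
    """Deflate nominal GDP by a price index (base period = 100)."""
    return nominal / (deflator / 100.0)

nominal = gdp_expenditure(consumption=14.0, investment=4.5,
                          government=3.8, net_exports=-0.9)  # trillions
print(nominal)                            # 21.4
print(real_gdp(nominal, deflator=107.0))  # 20.0
```

The deflator division is exactly the step that breaks down when quality changes weekly: if the index misses quality gains, "real" growth is understated.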
Generative AI shows how a weekly cadence strains traditional measurement. Product capability can change in days via software pushes, new model weights, or tool integrations. Near‑term channels to GDP are straightforward: capital formation in models, data pipelines, and compute; business investment in AI software; consumer spending on AI‑enhanced services; public‑sector efficiency gains; and exports of AI software and services. What is harder to see: consumer surplus from “free” features, time saved outside paid work, firm‑specific intangibles like proprietary data and model weights, and rapid quality improvements that high‑frequency price indexes struggle to capture. Classification frictions add noise when AI spending is expensed instead of capitalized and when benefits show up as quality rather than quantity.
Lighthouse Nowcast
Blue hour. A slow beam cuts through sea mist and turns fog into shape. On the pier, our humanoid stands beside a brass sextant and a weathered logbook, listening to the water breathe. Nothing urgent happens; the scene is patient.
In the article, we argue that growth now shifts week to week. The lighthouse is our nowcast—each sweep gathers faint signals and turns them into bearings we can steer by. Those pale contours in the mist hint at traffic lanes of activity: training and inference time, tokens processed, API calls, seats activated. The sextant and logbook stand for an AI satellite account: careful notes that turn what was invisible into entries we can track. The robot is not rushing to compute; it’s learning how to read the weather of the economy.
Quarterly buoys still matter, but a coastline this active needs guidance in real time. Good measurement here is navigation, not ceremony—adjust the beam, check the bearings, write it down, and sail again. If innovation moves in weeks, the light must sweep that often, so the ship of policy and enterprise doesn’t aim at yesterday’s horizon.
Here’s a clear, practical playbook for measuring fast‑moving technology:
Create an AI satellite account. A satellite account is an auxiliary set of tables published alongside core GDP. It makes AI activity visible instead of lost in broad categories. Include: (a) investment in model development, compute, data collection/labeling, and AI software; (b) long‑lived intangibles such as model weights, curated datasets, and proprietary code; and (c) output from AI products and services, plus imputed savings when agencies or firms replace manual work with AI.
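The three tables above can be sketched as a simple data structure. This is a hypothetical schema, not an official statistical classification; all category names and dollar amounts are illustrative.

```python
from dataclasses import dataclass, field

# Hypothetical AI satellite account: three auxiliary tables published
# alongside core GDP. Categories and figures are illustrative only.
@dataclass
class SatelliteAccount:
    investment: dict = field(default_factory=dict)   # (a) AI investment flows
    intangibles: dict = field(default_factory=dict)  # (b) long-lived intangible assets
    output: dict = field(default_factory=dict)       # (c) AI output + imputed savings

    def total_investment(self) -> float:
        return sum(self.investment.values())

acct = SatelliteAccount()
acct.investment.update({
    "model_development": 120.0,           # $ millions, invented
    "compute": 340.0,
    "data_collection_labeling": 45.0,
    "ai_software": 210.0,
})
acct.intangibles["model_weights"] = 80.0
acct.output["imputed_manual_work_savings"] = 15.0
print(acct.total_investment())  # 715.0
```

The point of the separate tables is visibility: the same flows exist inside core GDP today, but aggregated away inside broader software and services categories.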
Build a weekly AI productivity nowcast. A nowcast is a near‑real‑time estimate. Use high‑frequency indicators that proxy production and use: GPU‑hours for training and inference, tokens processed, API call volumes, cloud AI spend, enterprise seat counts, and active users. Combine these into a single index, validate with brief firm surveys and transaction data, and publish the method so others can replicate it.
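One plausible way to combine the indicators is to standardize each series against its own recent history and take a weighted average. The weights, series, and z-score method below are illustrative assumptions, not an official methodology.

```python
from statistics import mean, stdev

# Sketch of a weekly nowcast index: z-score each indicator against its
# own history, then take a weighted average. Weights and weekly values
# are invented placeholders.
def zscore(series):
    mu, sd = mean(series), stdev(series)
    return [(x - mu) / sd for x in series]

def nowcast_index(indicators: dict, weights: dict) -> list:
    """indicators: name -> weekly series; weights: name -> weight."""
    z = {name: zscore(s) for name, s in indicators.items()}
    n_weeks = len(next(iter(z.values())))
    total_w = sum(weights.values())
    return [sum(weights[k] * z[k][t] for k in z) / total_w
            for t in range(n_weeks)]

weekly = {
    "gpu_hours": [100, 110, 125, 160],   # training + inference
    "tokens_tn": [2.0, 2.2, 2.1, 2.9],   # tokens processed, trillions
    "api_calls": [50, 48, 55, 70],       # millions
}
w = {"gpu_hours": 0.4, "tokens_tn": 0.3, "api_calls": 0.3}
index = nowcast_index(weekly, w)
print(index[-1] > index[0])  # True: latest week sits above the period average
```

Publishing exactly this kind of recipe—series definitions, weights, standardization window—is what makes the nowcast replicable and auditable.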
Improve price measurement. Deflators are the price indexes used to strip out inflation so we can see “real” growth. Update them more often for AI categories and adjust for quality using rapid hedonic methods—statistical techniques that separate true performance gains from simple price moves. Example proxies: cost per trillion tokens at a fixed accuracy, cost per solved benchmark task, or latency at a given model size.
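The "cost at fixed quality" idea can be shown directly: exclude observations below a quality threshold, then compare unit costs across periods. The spend figures, accuracy numbers, and threshold below are invented for illustration.

```python
# Quality-adjusted price proxy: cost per trillion tokens at a fixed
# accuracy bar. All numbers are invented for illustration.
def cost_per_trillion_tokens(spend: float, tokens_tn: float,
                             accuracy: float, min_accuracy: float = 0.90):
    """Unit cost at fixed quality; None if the model misses the bar."""
    if accuracy < min_accuracy:
        return None  # below-threshold models don't enter the index
    return spend / tokens_tn

q1 = cost_per_trillion_tokens(spend=500_000, tokens_tn=2.0, accuracy=0.92)
q2 = cost_per_trillion_tokens(spend=450_000, tokens_tn=3.0, accuracy=0.93)
print(q1, q2)    # 250000.0 150000.0
print(q2 / q1)   # 0.6 -> a 40% quality-adjusted price decline
```

Note the asymmetry with a naive deflator: nominal spend fell only 10%, but at fixed quality the price fell 40%—the gap is exactly the "real" gain that slow-moving indexes miss.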
Avoid double counting. Set simple rules: capitalize internal data creation and model training once; separate intermediate AI services used by firms from final AI services sold to end users; and split spending between operating costs and investment when it creates a long‑lived asset.
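The three rules read naturally as a decision function. The labels and field names below are hypothetical; real statistical agencies encode these distinctions in classification manuals, not code.

```python
# Sketch of the double-counting rules as a classifier. Field names and
# category labels are invented for illustration.
def classify_ai_spend(item: dict) -> str:
    """item: {'kind': str, 'buyer': str, 'creates_long_lived_asset': bool}."""
    # Rule 1: capitalize internal data creation and model training once.
    if item["kind"] in {"internal_data_creation", "model_training"}:
        return "capitalize_once"
    # Rule 2: intermediate AI services net out; only final sales add to GDP.
    if item["kind"] == "ai_service":
        return "final_output" if item["buyer"] == "end_user" else "intermediate"
    # Rule 3: split opex from investment by whether a long-lived asset results.
    return "investment" if item.get("creates_long_lived_asset") else "operating_cost"

print(classify_ai_spend({"kind": "model_training", "buyer": "firm"}))   # capitalize_once
print(classify_ai_spend({"kind": "ai_service", "buyer": "firm"}))       # intermediate
```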
A quick walk‑through. A company ships an AI copilot update in May. Record new subscription revenue (final output). If the feature is free, estimate time saved using time‑use surveys multiplied by wage rates (imputed output). Capitalize the training run and data curation that produced the new model weights (investment). Update the relevant price index using a quality‑adjusted metric such as cost per successful task. Feed weekly usage telemetry into the nowcast to track momentum.
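The walk-through can be run as arithmetic. Every figure below—revenue, hours, wage, costs, task prices—is invented purely to show how the pieces combine.

```python
# The May copilot walk-through in numbers. All figures are invented.
subscription_revenue = 1_200_000           # final output: new paid subscriptions

hours_saved = 50_000                       # from time-use surveys (free feature)
avg_wage = 35.0                            # wage rate used for the imputation
imputed_output = hours_saved * avg_wage    # value of time saved outside paid work

training_run_cost = 800_000                # capitalized as investment, not expensed
data_curation_cost = 150_000
capitalized_investment = training_run_cost + data_curation_cost

# Quality-adjusted price input: cost per successful task, before vs. after.
cost_per_task_old, cost_per_task_new = 0.040, 0.032
price_relative = cost_per_task_new / cost_per_task_old

print(imputed_output)            # 1750000.0
print(capitalized_investment)    # 950000
print(round(price_relative, 2))  # 0.8 -> 20% quality-adjusted price decline
```

Each line maps to one sentence of the walk-through: revenue to final output, the imputation to consumer surplus, capitalization to investment, and the price relative to the deflator update; the usage telemetry then feeds the nowcast.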
In short, if innovation now moves week to week, measurement must learn at the same pace—so that real productivity gains show up in the statistics, not only in product demos.