AI Externalities Lab

When inference dominates training

Abstract

We model the total energy footprint of a deployed AI system as a sum of one-time training energy and cumulative inference energy over time. The central goal is to derive closed-form dominance thresholds (when inference overtakes training), study how those thresholds shift under adoption dynamics, and identify which parameters are the highest-leverage targets for efficiency.

Model

Let

- E_train ≥ 0 be the one-time training energy,
- u(t) ≥ 0 be the query rate at time t ≥ 0,
- e > 0 be the energy consumed per query.

Define total energy up to horizon T:

E_tot(T) = E_train + E_inf(T).

Define inference energy:

E_inf(T) = e ∫_0^T u(t) dt.
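Under these definitions the model can be sketched numerically; the trapezoidal integrator, helper names, and example numbers below are illustrative assumptions, not part of the model:

```python
def inference_energy(u, e, T, n=10_000):
    """E_inf(T) = e * integral_0^T u(t) dt, approximated by the trapezoidal rule."""
    dt = T / n
    area = 0.0
    for i in range(n):
        t0, t1 = i * dt, (i + 1) * dt
        area += 0.5 * (u(t0) + u(t1)) * dt
    return e * area

def total_energy(E_train, u, e, T):
    """E_tot(T) = E_train + E_inf(T)."""
    return E_train + inference_energy(u, e, T)

# Illustrative numbers: constant usage of 2.0 queries per unit time,
# e = 0.5 energy per query, E_train = 10 energy units, horizon T = 100,
# so E_inf = 0.5 * 2.0 * 100 = 100 and E_tot = 110.
print(round(total_energy(10.0, lambda t: 2.0, 0.5, 100.0), 6))  # 110.0
```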

Result 1: break-even (inference-dominance) time

Define the break-even time T* by

E_inf(T*) = E_train,

so that cumulative inference energy exceeds training energy for all T > T*.

Constant usage

If u(t) = u0 (constant usage), then

T* = E_train / (e · u0).

Interpretation: inference dominates sooner when per-query efficiency is worse (larger e) or usage is higher (larger u0).
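As a quick numeric check of the constant-usage threshold (the energy and usage figures below are illustrative assumptions, not measurements):

```python
def break_even_constant(E_train, e, u0):
    """Break-even horizon T* solving e * u0 * T* = E_train (constant usage)."""
    return E_train / (e * u0)

# Illustrative assumption: E_train = 1e6 energy units, e = 1 unit per query,
# u0 = 1e4 queries per day -> inference overtakes training after 100 days.
print(break_even_constant(1e6, 1.0, 1e4))  # 100.0
```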

Result 2: exponential adoption

Assume exponentially growing usage,

u(t) = u0 · e^{g t}, with growth rate g > 0.

Then

E_inf(T) = (e · u0 / g) · (e^{g T} − 1),

and break-even is

T* = (1/g) · ln(1 + g · E_train / (e · u0)).

As g → 0, this recovers the constant-usage threshold E_train / (e · u0).
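The closed forms for exponential adoption can be cross-checked against each other; the growth rate and other numbers below are illustrative assumptions:

```python
import math

def inference_energy_exp(e, u0, g, T):
    """E_inf(T) = (e * u0 / g) * (exp(g*T) - 1) for u(t) = u0 * exp(g*t)."""
    return (e * u0 / g) * (math.exp(g * T) - 1.0)

def break_even_exp(E_train, e, u0, g):
    """T* = (1/g) * ln(1 + g * E_train / (e * u0))."""
    return math.log(1.0 + g * E_train / (e * u0)) / g

# Same illustrative numbers as the constant case, plus growth g = 0.01/day.
T_star = break_even_exp(1e6, 1.0, 1e4, 0.01)
# Growth pulls break-even earlier than the constant-usage 100 days:
assert T_star < 100.0
# And cumulative inference energy at T* matches the training energy:
assert abs(inference_energy_exp(1.0, 1e4, 0.01, T_star) - 1e6) < 1e-3
```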

Roadmap

Notes

This project is intentionally math-first: the model should stay interpretable, and every assumption should be explicit.