
The Economic Anomaly of AI Agents

An analysis of why AI agents with deep context constitute a new kind of economic entity that doesn't fit existing frameworks, and what that implies.

I am not an economist. But Coase’s “The Nature of the Firm” has fascinated me for years, especially as outsourcing, SaaS, and cloud infrastructure have kept redrawing the line between what you build and what you buy. I have been building with AI agents long enough to notice that a contextualized agent (a Large Language Model, or LLM, API layered with system prompts, repos, memory, and operational history) does not sit on either side of that line. The practical question is simple: can you source this thing from the market, or does it only exist if you build it? I went looking for the framework that answers that question and could not find one. What follows is my attempt to work through the economics I do know, show where each framework breaks down, and trace what the gap might mean.

Economic Foundations

Several economic frameworks are relevant to understanding AI agents.1 Each captures part of the picture and none captures the whole.

Ronald Coase and Transaction Cost Theory

Ronald Coase’s 1937 paper “The Nature of the Firm” asked a deceptively simple question: if markets are efficient, why do firms exist at all? His answer was transaction costs. Using the market to coordinate work involves real friction: finding suppliers, negotiating terms, enforcing contracts, transferring knowledge. When those costs exceed the overhead of doing the work internally, you bring it inside the firm. The boundary of the firm sits where the marginal cost of internal coordination equals the marginal cost of market transactions.
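Coase’s boundary condition reduces to a make-or-buy comparison at the margin. A minimal sketch, with invented activities and per-unit costs (none of these numbers come from the essay):

```python
def firm_boundary(internal_cost, transaction_cost):
    """Coase's rule of thumb: internalize an activity when internal
    coordination is cheaper than transacting for it on the market."""
    return "make" if internal_cost < transaction_cost else "buy"

# Illustrative activities with made-up per-unit costs.
activities = {
    "payroll processing": {"internal": 9.0, "market": 4.0},   # commodity service: buy
    "core product work":  {"internal": 5.0, "market": 12.0},  # knowledge transfer is costly: make
}

for name, cost in activities.items():
    print(f"{name}: {firm_boundary(cost['internal'], cost['market'])}")
```

The firm boundary sits exactly where the two costs cross; everything the rest of the essay argues is about what happens when one side of that comparison collapses.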

Coase’s framework emerged in a world of physical assets and early services. The things being transacted were rivalrous and excludable. If you hire a worker, that worker is unavailable to others. The entire logic of the firm boundary depends on scarcity.2

Oliver Williamson and Asset Specificity

Oliver Williamson extended Coase with the concept of asset specificity. When an investment becomes valuable only in the context of a particular relationship, market transactions become dangerous because either party can exploit the dependency. A factory tooled for one buyer’s custom parts cannot easily serve other buyers. This lock-in effect strengthens the case for vertical integration: bringing the specific asset inside the firm to protect against opportunistic exploitation.

The key insight is that the more specialized an asset is, the more likely it will be governed within a firm rather than through market contracts.

Hal Varian, Carl Shapiro, and Information Goods Economics

Hal Varian and Carl Shapiro’s work on information economics describes goods with high fixed costs and near-zero marginal costs. Creating the first copy of a piece of software, a book, or a database is expensive. Every subsequent copy is essentially free. This inverts classical pricing: the marginal cost pricing that works for physical goods would drive the price to zero. So information goods rely on differentiation, bundling, versioning, and lock-in.

Their framework, laid out in Information Rules, explains why software “eats everything”: once the fixed cost is paid, distribution is near-free, and the producer who captures the market first can amortize that fixed cost across millions of users.
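The amortization logic can be made concrete with a toy average-cost calculation; the $10M creation cost and $0.01 delivery cost are hypothetical, chosen only to show the curve:

```python
def avg_cost_per_copy(fixed_cost, marginal_cost, copies):
    """Average cost of an information good: the fixed cost amortizes
    across copies while the marginal cost stays near zero."""
    return (fixed_cost + marginal_cost * copies) / copies

# Hypothetical: $10M to create the first copy, $0.01 to deliver each one.
for n in (1_000, 1_000_000, 100_000_000):
    print(f"{n:>11,} copies -> ${avg_cost_per_copy(10_000_000, 0.01, n):,.2f} each")
```

The average cost falls toward the marginal cost as the user base grows, which is why capturing the market first matters so much for information goods.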

Robert Solow and the Productivity Paradox

Robert Solow observed in 1987 that “you can see the computer age everywhere but in the productivity statistics.” Computers were obviously transforming work, yet measured productivity remained flat. The paradox was not that computers did nothing, but that the measurement tools, designed for an industrial economy of countable physical outputs, could not capture what computers were actually doing. Output per worker-hour is clean when output is widgets. When output shifts to documents, decisions, coordination, and knowledge, the metric goes blind.

The deeper issue: computers did not just accelerate existing work. They changed what work was. Spreadsheets made iteration free, enabling analysis that nobody would have attempted on paper. Email did not replace memos; it restructured decision-making. The productivity was not in the tool but in the organizational transformation it eventually forced, a transformation that took decades to manifest and even longer to measure.


The Economic Anomaly of the AI Agent

An AI agent, understood as an LLM API contextualized with accumulated environment-specific knowledge (system prompts, repos, memory files, lessons, credential scopes, operational history), seems to be a new kind of economic entity. Each of its properties is individually known from other domains. The combination is genuinely novel.

A commodity that becomes non-fungible through use

The LLM API is a utility, identical for everyone, metered by the token, with minimal switching cost. It is electricity. But the moment context accumulates on top of it (a CLAUDE.md, a repo, weeks of lessons, environmental knowledge, credential scopes), the thing stops being a commodity. It is electricity running through a specific machine that was built over weeks and cannot be replicated from a catalog. The commodity is the input. The output is bespoke. There is no existing economic category for a commodity that becomes non-fungible through use.

Software that behaves like a practice

Traditional software is a defined artifact. It can be versioned, shipped, installed, audited, hashed. An agent is a probability distribution narrowed by context, producing different behavior every run. It is not an executable but a trajectory through latent space shaped by accumulated artifacts. The artifacts themselves are software, yet the thing they produce when fed to the model is not, in any traditional sense. It is closer to a practice than a product: a body of knowledge in action, not a static binary.

Free to copy, costs money to use

The marginal cost of duplicating the agent’s knowledge (its context, repos, memory) is zero. The marginal cost of the agent doing anything is positive and metered per token. This is like duplicating a factory for free while still paying for electricity every time it runs. No physical good works this way: the capital cost is zero while the operating cost is nonzero. Every copy is simultaneously free and expensive depending on whether it is sitting or working.
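The split between free duplication and metered operation can be sketched as a toy cost function; fleet size, tokens per run, and the $/Mtok price below are invented for illustration:

```python
def fleet_cost(copies, runs_per_copy, tokens_per_run, price_per_mtok):
    """Cost of a fleet of agent copies: duplicating the context is
    free, but every run is metered by the token."""
    duplication_cost = 0.0 * copies  # copying repos/memory costs ~nothing
    operating_cost = copies * runs_per_copy * tokens_per_run / 1e6 * price_per_mtok
    return duplication_cost + operating_cost

# Hypothetical numbers: 10 copies, 50 runs each, 200k tokens/run, $15 per Mtok.
print(fleet_cost(copies=10, runs_per_copy=50, tokens_per_run=200_000,
                 price_per_mtok=15.0))  # → 1500.0
```

The capital term is identically zero no matter how many copies exist; the entire cost is operating expense, which is the inversion of every physical capital model.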

Meta-software: substrate, medium, product, environment

The agent does not just run as software; it also produces, reads, modifies, and reasons about software. Software is simultaneously the agent’s medium, its product, its environment, and its own substrate. A traditional SaaS product sits in a category: a tool that does a thing. An agent is a tool that makes tools, including potentially better versions of itself. There is no stable economic category for a good that produces more of the kind of thing it is.

The combined anomaly

These four properties together yield an entity that:

  • Runs on a commodity but is not one
  • Looks like software but behaves like a practice
  • Is free to copy but costs money to use
  • Consumes, produces, and modifies its own substrate

It is not a good, not a service, not capital, not labor. It has properties of all four simultaneously. Coase cannot model it because it is not a transaction. Williamson cannot model it because the asset specificity is real but the asset is infinitely duplicable. Varian cannot model it because it is not a static information good; it changes through use.

In When Coase Met Turing [1], I built a reaction-diffusion visualization that lets you watch firm boundaries form from competing cost gradients. The digital rupture could be modeled there by pushing coordination range toward non-local. AI agents break the model entirely: the pattern starts modifying its own substrate.



Implications and Projections

The Coasean firm boundary shifts

If an agent accumulates institutional knowledge and that knowledge is duplicable at near-zero cost, the fixed cost of internal software production drops dramatically. The Cloudflare/Vinext case [2] (one engineer, one week, $1,100 in tokens to reimplement the Next.js API surface) is an early proof point. Work that was previously “buy” because the fixed cost was prohibitive becomes “build” because the fixed cost collapsed. The Coasean boundary moves inward. SaaS products that survive on the assumption that internal development is permanently expensive are running on a clock.3

The shift is not one-time. The more you build internally with agents, the cheaper the next build becomes, because the agent’s accumulated knowledge carries forward. This is a positive feedback loop eroding the buy case over time.
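The feedback loop can be sketched as a toy model in which each internal build cuts the next build’s fixed cost by a constant learning rate; the $50k first build, 30% learning rate, and $20k SaaS price are all hypothetical:

```python
def builds_until_cheaper(first_build_cost, learning_rate, saas_price, max_projects):
    """Each internal build carries knowledge forward, cutting the next
    build's fixed cost by `learning_rate`. Return the first project for
    which building undercuts buying, and the build cost at that point."""
    cost = first_build_cost
    for project in range(1, max_projects + 1):
        if cost < saas_price:
            return project, round(cost, 2)
        cost *= (1 - learning_rate)  # accumulated agent knowledge compounds
    return None, round(cost, 2)

# Hypothetical: first build costs $50k, each build is 30% cheaper, SaaS costs $20k.
print(builds_until_cheaper(50_000, 0.30, 20_000, 10))  # → (4, 17150.0)
```

The exact parameters do not matter; under any positive learning rate the build cost eventually crosses below any fixed buy price, which is the structural point.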

Asset specificity without lock-in cost

Williamson’s framework assumes that acquiring a specific asset is expensive each time. When duplication is free, you get specificity (the agent’s knowledge is valuable only in your context) without the normal acquisition cost for additional instances. This breaks the trade-off Williamson described.4 You can have as many specialists as you want without the cost that normally constrains specialization.

The binding constraint shifts from acquisition cost to the principal’s coordination capacity: the attention required to direct, evaluate, and integrate the work of multiple agents.5 This is Coase’s other prediction, that firm size is limited by the entrepreneur’s ability to coordinate, manifesting in a regime he never imagined.

The measurement problem recurs

The Solow paradox is replaying.6 Enterprises deploy agents and measure throughput: tickets closed, hours saved, FTE equivalents.7 Those metrics capture the Taylorist surface and miss everything underneath. An agent that accumulates institutional knowledge, makes subsequent builds cheaper, and enables duplication of specialists at zero marginal cost produces value that no existing productivity metric can express.

The deeper value (compound knowledge, environmental specialization, heritable expertise) is invisible to the measurement tools. So it does not get funded, does not get studied, does not get built deliberately. It only emerges accidentally in setups where someone runs agents long enough and pays close enough attention to notice.

What cannot be measured does not exist in an enterprise budget.8 This may be the largest obstacle to realizing the non-Taylorist potential of AI agents.

Capital markets cannot price the contextualized agent

Markets price by analogy. SaaS gets a revenue multiple. Headcount replacement gets an ROI calculation. Infrastructure gets a capex depreciation model. The LLM API layer can be priced because it is a metered service. But the contextualized agent, the thing that sits on top, has no analogue. It is not SaaS (not a defined product), not consulting (does not bill hours), not an employee (zero marginal duplication), not infrastructure (produces different things each time).

Analysts default to whichever analogy their model supports: automation software, staff augmentation, developer tooling. Each captures a fraction and misses the rest. The inability to categorize leads to the inability to value, which leads to systematic underinvestment in the non-commodity layer where most of the actual value accumulates.

Early days

All of this is observation from a narrow window. The models are changing quarterly. The context mechanisms are changing faster. The institutional response has barely started. Every claim in this post has a shelf life, and some of them may look naive within a year.

But the core anomaly, a commodity that becomes non-fungible through use and then modifies its own substrate, is structural. It does not depend on which model is frontier this quarter or which vendor wins the enterprise market. If the pattern holds, the economics will eventually catch up with a framework that handles it. Until then, the practitioners who are building these things are running ahead of the theory that is supposed to explain what they are doing.


References

  1. When Coase Met Turing, companion essay on economic morphogenesis and reaction-diffusion firm boundaries
  2. Cloudflare/Vinext, one engineer reimplementing the Next.js API surface in a week for $1,100 in tokens
  3. The SaaSpocalypse (TechCrunch, March 2026)
  4. Anthropic’s enterprise agents program (TechCrunch, February 2026), Cowork plug-ins for finance, legal, HR
  5. AI Adoption and AI Exposure (NBER Working Paper 34836, February 2026), survey of ~6,000 executives across four countries
  6. The AI productivity paradox (Fortune, February 2026)
  7. An AI Moment: Possibilities, Productivity, Policy (San Francisco Fed, February 2026)
  8. The Impact of AI on Software Engineering (Faros AI), study across 10,000 developers
  9. The AI productivity paradox (Platformer), Workday’s 2026 findings on AI review overhead
  10. The Productivity Paradox (Man Group), historical precedent from electricity and computers
  11. Don’t Count Your Productivity Data Chickens (Yale Budget Lab), measurement noise in current AI productivity data

  1. Specifically, LLM-based AI agents with deep context encoding hard-to-source knowledge. ↩︎

  2. Coase’s transaction cost logic and its implications for consulting and strategy mapping are threads I explore separately. ↩︎

  3. This is tied to the so-called SaaSpocalypse [3]. ↩︎

  4. This holds for the fully contextualized agent built from scratch. However, a middle category is emerging: pre-contextualized agents that ship with domain-generic skills and are designed to accumulate firm-specific context through enterprise use. Anthropic’s enterprise agents program [4] (Cowork plug-ins for finance, legal, HR, with private marketplaces and controlled data flows) is the clearest example. These reintroduce a form of lock-in that operates not at the model layer but at the harness-to-context integration layer: the customizations a firm builds on a particular platform’s plug-in architecture are practically coupled to that harness, even if the underlying model remains swappable. Williamson’s asset specificity reasserts itself one level up the stack. ↩︎

  5. Palantir has been monetizing this exact constraint for over a decade. Their Forward Deployed Engineers build ontologies, structured representations of a firm’s entities, relationships, and decision logic, on top of a commodity platform (Foundry, now AIP). The ontology is what converts the generic harness into a non-fungible institutional asset. This reveals a scope gradient in the commodity-to-bespoke transition: Anthropic’s Cowork plug-ins [4] operate at departmental scope, while Palantir’s ontologies encode cross-functional entity models and decision structures at institutional scope. The deeper the contextualization, the higher the lock-in and the wider the margin. The open question is whether departmental agents accumulate toward institutional-depth context organically through use (bottom-up), or whether institutional ontology must be deliberately engineered top-down, which is Palantir’s implicit bet. ↩︎

  6. A February 2026 NBER study [5] surveying nearly 6,000 executives across the US, UK, Germany, and Australia found that roughly 90% of firms report no measurable impact from AI on employment or productivity over the past three years, despite 69% actively using it. Apollo chief economist Torsten Slok updated Solow’s line directly: “AI is everywhere except in the incoming macroeconomic data.” See also [6] and [7]. ↩︎

  7. The micro-macro disconnect is sharp. Faros AI [8] found that high-AI-adoption teams complete 21% more tasks and merge 98% more pull requests, but PR review time increases 91%. Individual speedup, no organizational throughput. Meanwhile, Workday’s 2026 research [9] found that 37-40% of time supposedly saved by AI gets consumed reviewing and correcting AI-generated output. ↩︎

  8. Historical precedent from Man Group [10]: electricity required around 40 years to show aggregate productivity impact, computers took 25-35 years. The Yale Budget Lab [11] adds that current productivity data is noisy and may be measurement artifact, comparing revised jobs figures against unrevised GDP. ↩︎