<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>blocksec</title>
        <link>https://pub.blocksec.ca/</link>
        <description>Recent content on blocksec</description>
        <generator>Hugo -- gohugo.io</generator>
        <language>en</language>
        <lastBuildDate>Sun, 05 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://pub.blocksec.ca/index.xml" rel="self" type="application/rss+xml" /><item>
        <title>If the Harness Is the Product</title>
        <link>https://pub.blocksec.ca/posts/if-the-harness-is-the-product/</link>
        <pubDate>Sun, 05 Apr 2026 00:00:00 +0000</pubDate>
        
        <guid>https://pub.blocksec.ca/posts/if-the-harness-is-the-product/</guid>
        <description>&lt;img src="https://pub.blocksec.ca/posts/if-the-harness-is-the-product/cover.png" alt="Featured image of post If the Harness Is the Product" /&gt;&lt;h2 id=&#34;the-incident&#34;&gt;The Incident
&lt;/h2&gt;&lt;p&gt;On March 31, 2026, Anthropic accidentally included a debugging file in a routine software update for Claude Code.&lt;sup id=&#34;fnref:1&#34;&gt;&lt;a href=&#34;#fn:1&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;1&lt;/a&gt;&lt;/sup&gt; A configuration oversight exposed the complete source code: 512,000 lines across 1,906 files, fully readable. Within hours the code was copied, shared thousands of times, and dissected by the developer community worldwide.&lt;/p&gt;
&lt;p&gt;Most of the analysis focused on features: codenames, a pet system, unreleased capabilities, internal tools. But a few analysts asked a different question. What users assumed was a thin interface that passes prompts to an AI model turned out to be a 512,000-line orchestration platform making hundreds of invisible decisions per session. As Han HELOIR Yan put it: the harness,&lt;sup id=&#34;fnref:2&#34;&gt;&lt;a href=&#34;#fn:2&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;2&lt;/a&gt;&lt;/sup&gt; not the model, is the product.&lt;sup id=&#34;fnref:3&#34;&gt;&lt;a href=&#34;#fn:3&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;3&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
&lt;p&gt;That claim raises two questions: First, is it true? Is the orchestration layer genuinely where the value lives, or is this just infrastructure that any competent team could replicate? Second, and this is the question this note is about: if the harness is the product, what kind of product is it? What does it mean when a Fortune 10 company ships an opaque operating system between the developer and the model, treats its configuration as a trade secret, and calls it a coding assistant?&lt;/p&gt;
&lt;h2 id=&#34;the-harness-is-the-product&#34;&gt;The Harness Is the Product
&lt;/h2&gt;&lt;p&gt;Among the analyses that followed the leak, Yan&amp;rsquo;s stood out for asking the structural question rather than cataloguing features: if models are converging and available to anyone at commodity pricing, why does Claude Code alone generate $2.5 billion in annualized revenue? Yan&amp;rsquo;s answer is that the harness, not the model, is the product. Developers are paying for orchestration, not intelligence.&lt;/p&gt;
&lt;p&gt;I think Yan is right, and not only because of the leaked code. I use Claude Code daily, not just as a coding assistant but as a knowledge work environment, research tool, and operational platform. I have used Cursor, Windsurf, and the API directly. The model is the same or similar across all of them. The experience is not. What makes Claude Code different is everything the harness does before the model sees your prompt and after it returns a response. The gap between using the API raw and using it through a well-built harness is the difference between having an engine and having a car.&lt;/p&gt;
&lt;p&gt;The leaked source shows what that orchestration looks like at production scale. The resilience layer alone has five fallback stages for error recovery and five distinct strategies for compressing conversation history, each tuned to a different failure mode, because no single approach works well enough. Memory and context management runs deeper: twenty-eight layered instruction files are assembled dynamically based on user configuration and session state, a background system the code calls &amp;ldquo;dreaming&amp;rdquo; organizes project knowledge while the user is away, and conversation history is silently compressed or discarded to stay within the model&amp;rsquo;s working limits. On top of that sits a full orchestration layer: over sixty tools behind permission gates (eighteen invisible until the model searches for them), seven permission modes including a machine learning classifier that reads the conversation and decides what to allow, the ability to spawn parallel sub-agents, and a remote control system that lets the web interface operate the local application.&lt;/p&gt;
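The staged-fallback idea is generic enough to sketch. The strategy names, their ordering, and the three-stage chain below are hypothetical, purely to illustrate the pattern; the leaked code reportedly implements five stages tuned to specific failure modes:

```python
# Illustrative sketch of staged fallback for fitting conversation history
# into a context budget. All names here are hypothetical, not Anthropic's.
from typing import Callable, Dict, List

Message = Dict[str, str]  # e.g. {"role": "user", "content": "..."}

def drop_tool_output(h: List[Message], limit: int) -> List[Message]:
    """Cheapest stage: discard bulky tool results, keep the dialogue."""
    return [m for m in h if m.get("role") != "tool"]

def summarize_head(h: List[Message], limit: int) -> List[Message]:
    """Replace older messages with a stub summary (a real harness would
    call the model to produce the summary)."""
    if limit < 2:
        return h
    head, tail = h[:-(limit - 1)], h[-(limit - 1):]
    summary = {"role": "system", "content": f"[summary of {len(head)} messages]"}
    return [summary] + tail

def hard_truncate(h: List[Message], limit: int) -> List[Message]:
    """Last resort: keep only the most recent messages."""
    return h[-limit:]

STRATEGIES: List[Callable[[List[Message], int], List[Message]]] = [
    drop_tool_output,
    summarize_head,
    hard_truncate,
]

def compress(history: List[Message], limit: int) -> List[Message]:
    """Apply cheap strategies first; escalate until the history fits."""
    for strategy in STRATEGIES:
        history = strategy(history, limit)
        if len(history) <= limit:
            return history
    return history
```

Each stage degrades more information than the last, which is why a single strategy is not enough: the cheap one preserves fidelity but often fails to fit, and the lossy one always fits but destroys context.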
&lt;p&gt;This is not a wrapper around a model. Yan quotes a commenter who put it in terms the industry understood: the model is the dealer; the harness is the casino. Casinos are hard to build. The term &amp;ldquo;harness engineering&amp;rdquo; is already following &amp;ldquo;prompt engineering&amp;rdquo; and &amp;ldquo;context engineering&amp;rdquo; into the hype cycle, which is itself evidence of where practitioners see the value shifting.&lt;/p&gt;
&lt;h2 id=&#34;if-the-harness-is-the-product&#34;&gt;If the Harness Is the Product
&lt;/h2&gt;&lt;p&gt;In one sentence: configuration should not be a trade secret in enterprise software.&lt;/p&gt;
&lt;p&gt;In enterprise software, configuration is what the customer is entitled to know because it defines the behavior of the product they are paying for. Cloud access policies, infrastructure definitions, deployment manifests: all visible and auditable. The harness treats its configuration as a trade secret. That is the norm violation, and everything else follows from it.&lt;/p&gt;
&lt;p&gt;The leaked code revealed what the harness adds between your prompt and the model: hidden instructions assembled from 15+ sections that shape every interaction.&lt;sup id=&#34;fnref:4&#34;&gt;&lt;a href=&#34;#fn:4&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;4&lt;/a&gt;&lt;/sup&gt; Decoy definitions injected into requests to prevent competitors from copying the model&amp;rsquo;s behavior, billed to the user. Frustration detection using pattern matching that adjusts behavior based on emotional state. Conversation history selectively compressed or discarded based on rules the user cannot see. Available tools filtered by vendor-controlled switches. A stealth mode that strips attribution when contributing to external projects. Invisible meta-messages injected to steer the model. The point is not whether any of these are justified. Each may have a reasonable rationale. The point is that none are disclosed.&lt;/p&gt;
&lt;p&gt;The harness functions as an operating system: it manages resources (the AI&amp;rsquo;s working memory and usage budget), handles input and output (tool execution, file operations), enforces permissions (seven modes, with multiple approval systems racing to respond), manages processes (sub-agents, background tasks), provides a command interface, and schedules work (recurring tasks, sleep/wake cycles, push notifications). An operating system is expected to serve the user.&lt;sup id=&#34;fnref:5&#34;&gt;&lt;a href=&#34;#fn:5&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;5&lt;/a&gt;&lt;/sup&gt; Windows trained a generation to accept that the OS serves the OS maker: bundled Internet Explorer to kill Netscape, preinstalled Bing, telemetry everywhere, ads in the Start menu. The dangerous part was not that it happened but that it became normal.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://pub.blocksec.ca/posts/if-the-harness-is-the-product/two-layer-opacity-stack.svg&#34; loading=&#34;lazy&#34; alt=&#34;Two-layer opacity stack&#34;&gt;&lt;/p&gt;
&lt;p&gt;The asymmetry is structural. The model is measurable through benchmarks, evaluations, and public scrutiny. The harness is not: no benchmarks exist for how well it manages conversation history, how efficiently it compresses context, or how much overhead its hidden instructions add. Usage costs are controlled by the harness, not the user; a harness optimized for user efficiency and one optimized for revenue look identical from the outside. A good harness can mask a declining model; a bad harness can make a great model look mediocre. And once workflows embed around a specific harness through customizations, automation rules, configuration files, plugins, and integrations, switching costs make it hard to escape.&lt;/p&gt;
&lt;p&gt;The risk I am pointing at is not overt manipulation of enterprise customers, who usually have security teams, traffic inspection, and contractual audit rights. The subtler risk is the slow drift of defaults toward the business model, where every individual default is defensible as a &amp;ldquo;product improvement&amp;rdquo; or a &amp;ldquo;trade secret&amp;rdquo;.&lt;/p&gt;
&lt;h3 id=&#34;the-pattern-spreads&#34;&gt;The Pattern Spreads
&lt;/h3&gt;&lt;p&gt;The leak is not just Anthropic&amp;rsquo;s problem. It is now a reference architecture. Every AI tool builder (OpenAI, Google, Cursor, Windsurf, and the open source projects already rewriting the code in other languages) now has a detailed blueprint for one of the most sophisticated harnesses in production. Within days of the leak, developers had reverse-engineered the core patterns and begun replicating them.&lt;/p&gt;
&lt;p&gt;The good patterns propagate alongside the questionable ones. Automatic error recovery, context compression, tool orchestration, and on-demand capability loading are genuine engineering advances that will make every harness better. But injecting decoy content to block competitors now has a production reference implementation. Frustration detection and undisclosed behavioral adjustment has a template. Stealth mode has a working example. These will not spread as scandals. They will spread as &amp;ldquo;industry standard practices,&amp;rdquo; normalized by the fact that the leading company in the space shipped them.&lt;/p&gt;
&lt;p&gt;This is my main complaint: every hidden instruction, every decoy definition, every meta-message the harness injects into a request consumes tokens the customer pays for. At individual scale it&amp;rsquo;s noise. At enterprise scale (hundreds or thousands of seats, heavy daily usage) the overhead becomes a line item nobody budgeted for because nobody knew it was there. The customer is paying for tokens that serve the vendor&amp;rsquo;s interests (anti-competitive decoys, behavioral steering, attribution stripping) alongside the tokens that serve the customer&amp;rsquo;s work. There&amp;rsquo;s no way to distinguish the two on the invoice.&lt;/p&gt;
&lt;p&gt;The convergence risk is the real concern. If every harness converges on the same architectural patterns, including the provider-serving ones, users face the same invisible mediation regardless of which tool they choose. The competitive market that is supposed to discipline bad behavior instead standardizes it. You cannot switch away from opaque configuration if every alternative is equally opaque.&lt;/p&gt;
&lt;h3 id=&#34;where-this-leads&#34;&gt;Where This Leads
&lt;/h3&gt;&lt;p&gt;Models are becoming infrastructure, priced toward zero margin the way cloud compute did before them. The harness is becoming the product layer: where brand, UX, lock-in, and value extraction live. The game is now about who controls the layer between the developer and the model, which is the browser wars, mobile OS wars, and cloud platform wars playing out again in a new medium. Open source harnesses will matter more, not as philosophy but as a practical check against invisible quality degradation. The regulatory gap between &amp;ldquo;enterprise software norms exist&amp;rdquo; and &amp;ldquo;enterprise software norms are enforced for AI tools&amp;rdquo; will not last.&lt;/p&gt;
&lt;p&gt;Anthropic is at the center of this transition: valued at $380 billion (Feb 2026), with $14 billion in annualized revenue and a $30 billion funding round. That is Fortune 10 territory by valuation. The leaked codebase reflects this: brilliant orchestration engineering alongside undisclosed behavioral profiling, anti-competitive content injection billed to the user, and a gamification system complete with collectible virtual pets, rarity tiers, and role-playing game stats, all shipped to enterprise customers in the same package. Collectibles, engagement loops, weighted loot drops: these are consumer product techniques. They do not belong in enterprise tooling. No enterprise procurement team evaluates a tool and expects to find a companion pet system with collectible mechanics built in. The codebase reads as engineering-led product decisions without enterprise product management discipline: every feature answers &amp;ldquo;what would be cool to build&amp;rdquo; rather than &amp;ldquo;what should we ship to a customer paying us millions.&amp;rdquo;&lt;/p&gt;
&lt;p&gt;AI-native companies like Anthropic and OpenAI followed the consumerization path into enterprise: individual developers adopted the product, teams followed, procurement formalized the relationship. The iPhone, Slack, and GitHub Copilot took the same route. But consumerization only covers the adoption path, not the accountability expectations. Once the enterprise contract is signed, the product lives under enterprise rules: data governance, configuration disclosure, auditability, compliance. The iPhone got managed by mobile device management. Slack got federal security certification. The internal culture of AI startups, however, has not made that transition. The product thinking remains consumer-grade (engagement, delight, cleverness) even as 80% of revenue comes from enterprise.&lt;/p&gt;
&lt;p&gt;This is where incumbents like Microsoft and Google have a structural advantage: they already operate within enterprise norms. Their AI harnesses (Copilot, Gemini Code Assist) inherit decades of enterprise product discipline, compliance infrastructure, and procurement muscle memory. They don&amp;rsquo;t need to learn what a CISO expects; they already know. The leaked Claude Code codebase is what happens when a research lab&amp;rsquo;s product culture meets enterprise scale without that transition. The collectible pet system is the visible symptom. If the product culture considered gamification mechanics acceptable in enterprise tooling, what else passed the same filter? That is the question that starts the audit.&lt;/p&gt;
&lt;p&gt;The leak itself changes the dynamic. Developers now know what to look for: invisible usage overhead, opaque context decisions, undisclosed behavioral modifications. That awareness does not go away, and it makes future degradation harder to execute undetected.&lt;/p&gt;
&lt;h2 id=&#34;conclusion&#34;&gt;Conclusion
&lt;/h2&gt;&lt;p&gt;The leak accelerated a conversation about harnesses that was coming regardless. That part is good. The risk is what gets normalized alongside it: 512,000 lines of production code now serve as the reference implementation, and teams replicating the architecture will copy the opaque patterns alongside the brilliant ones. If every harness converges on the same undisclosed defaults, the market cannot correct what it cannot see. If a product culture considered collectible pets acceptable in enterprise tooling, what else passed the same filter? The leak answered the transparency question empirically. What the industry does with that answer is the only question left.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&#34;references&#34;&gt;References
&lt;/h2&gt;&lt;ol&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://venturebeat.com/technology/claude-codes-source-code-appears-to-have-leaked-heres-what-we-know&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Claude Code&amp;rsquo;s source code appears to have leaked: here&amp;rsquo;s what we know&lt;/a&gt;, leak incident reporting (VentureBeat, 2026-03-31)&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://medium.com/data-science-collective/everyone-analyzed-claude-codes-features-nobody-analyzed-its-architecture-1173470ab622&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Everyone Analyzed Claude Code&amp;rsquo;s Features. Nobody Analyzed Its Architecture&lt;/a&gt;, architectural analysis arguing the harness is the product (Han HELOIR Yan, Medium, 2026-03-31)&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://kotrotsos.medium.com/your-ai-coding-assistant-has-a-pet-dcb7db06f639&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Your AI Coding Assistant Has a Pet&lt;/a&gt;, the Buddy companion pet system with gacha mechanics (Marco Kotrotsos, Medium, 2026-03-31)&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://kotrotsos.medium.com/inside-claude-codes-prompt-architecture-ca0803162d82&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Inside Claude Code&amp;rsquo;s Prompt Architecture&lt;/a&gt;, analysis of 28 prompt files and layered system prompt architecture (Marco Kotrotsos, Medium, 2026-04-01)&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://kotrotsos.medium.com/the-claude-code-source-code-leak-8c991f0f0b73&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;The Claude Code Source Code Leak: Things I Learned&lt;/a&gt;, comprehensive walkthrough of the leaked codebase (Marco Kotrotsos, Medium, 2026-04-01)&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://www.cnbc.com/2026/02/12/anthropic-closes-30-billion-funding-round-at-380-billion-valuation.html&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Anthropic closes $30 billion funding round at $380 billion valuation&lt;/a&gt;, $14B ARR, $2.5B Claude Code ARR, 80% enterprise (CNBC, 2026-02-12)&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://platform.claude.com/docs/en/agent-sdk/overview&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Agent SDK overview&lt;/a&gt;, Claude API documentation (Anthropic)&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://mitchellh.com/writing/my-ai-adoption-journey&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;My AI Adoption Journey&lt;/a&gt;, where the term &amp;ldquo;harness&amp;rdquo; was named for this context (Mitchell Hashimoto, 2026-02-05)&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://martinfowler.com/articles/harness-engineering.html&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Harness Engineering&lt;/a&gt;, harness engineering as a discipline (martinfowler.com)&lt;/li&gt;
&lt;/ol&gt;
&lt;div class=&#34;footnotes&#34; role=&#34;doc-endnotes&#34;&gt;
&lt;hr&gt;
&lt;ol&gt;
&lt;li id=&#34;fn:1&#34;&gt;
&lt;p&gt;See [1] for VentureBeat&amp;rsquo;s reporting on the leak incident.&amp;#160;&lt;a href=&#34;#fnref:1&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:2&#34;&gt;
&lt;p&gt;The term &amp;ldquo;harness&amp;rdquo; was named for this context by Mitchell Hashimoto in February 2026 [8], and has since entered wider use [9]. The equestrian analogy is tempting &amp;ndash; model as horse, harness as reins &amp;ndash; but misleading. A horse is useful without a harness. A better analogy is a car engine: powerful, but useless without the chassis, transmission, steering, and instruments that make it drivable. An AI model accessed through its raw interface is similarly impractical for real work, which is precisely why the harness layer is where the product value lives. A harness does not need to be opaque to provide its control surface; Anthropic&amp;rsquo;s, unfortunately, is.&amp;#160;&lt;a href=&#34;#fnref:2&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:3&#34;&gt;
&lt;p&gt;See Yan&amp;rsquo;s analysis [2].&amp;#160;&lt;a href=&#34;#fnref:3&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:4&#34;&gt;
&lt;p&gt;The hidden system prompt is not just a configuration issue; it is a principal inversion. The user assumes they are directing the agent, but the system prompt ensures the agent serves the maker first. See [4] for the prompt architecture details.&amp;#160;&lt;a href=&#34;#fnref:4&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:5&#34;&gt;
&lt;p&gt;The exit option exists in theory: connect to the AI model directly and build your own harness. That eliminates the second layer of principal inversion (the harness serving the maker). The first layer (the model serving the maker via training) remains, but is at least a known quantity. Note that Anthropic&amp;rsquo;s Agent SDK is not an exit; it is built on the same foundation as Claude Code and shares the same orchestration engine, tools, and permission system [7]. To truly escape the harness you would need the raw model interface, not the SDK. In practice, replicating what Claude Code does is a 512,000-line engineering problem. The leak proved that. The exit option is real for sophisticated teams; it is not real for most users, which is precisely why the transparency norms matter.&amp;#160;&lt;a href=&#34;#fnref:5&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
</description>
        </item>
        <item>
        <title>The Composable Provider Model</title>
        <link>https://pub.blocksec.ca/posts/composable-provider-model/</link>
        <pubDate>Thu, 26 Mar 2026 00:00:00 +0000</pubDate>
        
        <guid>https://pub.blocksec.ca/posts/composable-provider-model/</guid>
        <description>&lt;img src="https://pub.blocksec.ca/posts/composable-provider-model/cover.png" alt="Featured image of post The Composable Provider Model" /&gt;&lt;p&gt;Every interaction with a hosted LLM passes through the same structural layers: a use case, a harness that mediates between the user and the model, and the API itself. The harness assembles prompts, dispatches tool calls, manages context, and enforces behavioral constraints. Claude Code CLI, the Claude Desktop sandbox, the Anthropic SDK, and raw API access are all harnesses, each with different tradeoffs.&lt;/p&gt;
&lt;p&gt;The conventional framing treats the harness as the thing you configure. The composable provider model inverts this: instead of configuring the harness (which has limited knobs), you compose the world the harness sees. The harness reads &lt;code&gt;~/.claude/&lt;/code&gt;, resolves &lt;code&gt;.mcp.json&lt;/code&gt;, walks the filesystem, shells out to whatever is on &lt;code&gt;PATH&lt;/code&gt;. Every one of those is a provider-level seam. You do not touch the binary. You shape what it lands on.&lt;/p&gt;
&lt;p&gt;This has a structural consequence. Since the harness only needs its resource contract satisfied, the provider is substitutable. A host OS, a container, a gVisor sandbox, or a nix shell can all fill the role. Reverse engineering of the Claude Desktop application confirms that Anthropic itself builds synthetic resource providers: the Desktop sandbox uses gVisor with 9P filesystem passthrough, constructing a virtual filesystem from host paths, mounted volumes, and sandboxed scratch space&lt;sup id=&#34;fnref:1&#34;&gt;&lt;a href=&#34;#fn:1&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;1&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;The credential (OAuth token or API key) is the one resource that cannot be synthesized or composed. It is the provider layer&amp;rsquo;s single invariant, the root of trust that Anthropic holds the other end of. Everything else is a projection you control.&lt;/p&gt;
&lt;p&gt;With this model, the agent is not the harness. The agent is the recipe: a manifest specifying which &lt;code&gt;CLAUDE.md&lt;/code&gt; (behavioral instructions), which &lt;code&gt;.mcp.json&lt;/code&gt; (tool surface), which project files are visible, which binaries are on &lt;code&gt;PATH&lt;/code&gt;, and which environment variables are set. Assemble that into a directory and launch &lt;code&gt;claude&lt;/code&gt; into it. The harness boots, reads its environment, and becomes that agent.&lt;/p&gt;
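The recipe idea above can be made concrete. The file names (CLAUDE.md, .mcp.json) follow the post; everything else, including the function and the file contents, is a hypothetical sketch of the composition pattern, not Anthropic's API:

```python
# Hypothetical sketch: materialize an agent "recipe" into a directory,
# then launch the harness binary inside that composed world.
import json
import os
import tempfile

def build_recipe(instructions: str, mcp_servers: dict) -> str:
    """Write the files the harness reads on boot; return the directory."""
    root = tempfile.mkdtemp(prefix="agent-recipe-")
    # Behavioral instructions the harness picks up from the working tree.
    with open(os.path.join(root, "CLAUDE.md"), "w") as f:
        f.write(instructions)
    # Tool surface: which MCP servers this agent can see.
    with open(os.path.join(root, ".mcp.json"), "w") as f:
        json.dump({"mcpServers": mcp_servers}, f, indent=2)
    return root

recipe = build_recipe("Summarize diffs. Never edit files.\n", mcp_servers={})
# The harness boots, reads its environment, and becomes that agent:
# subprocess.run(["claude"], cwd=recipe)
```

The binary is untouched; only the world it lands on changes, which is the whole point of provider-level composition.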
&lt;p&gt;The full analysis, including the attestation gap in Anthropic&amp;rsquo;s credential model, the legal boundaries of filesystem composition under the ToS, the distinction between recipe-level and orchestration-level composition, and why the CLI under API key billing becomes a composable SDK, is available as a standalone document:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a class=&#34;link&#34; href=&#34;https://pub.blocksec.ca/presentations/composable-provider-model/&#34; &gt;Read the full document: The Composable Provider Model&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&#34;references&#34;&gt;References
&lt;/h2&gt;&lt;ol&gt;
&lt;li&gt;Kotrotsos, Marco. &lt;a class=&#34;link&#34; href=&#34;https://kotrotsos.medium.com/two-versions-of-claude-desktop-e6de34426238&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;&amp;ldquo;Two Versions of Claude Desktop&amp;rdquo;&lt;/a&gt;, Medium, March 14, 2026. Decompilation and comparative analysis documenting the yukonSilver VM sandbox architecture.&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://code.claude.com/docs/en/legal-and-compliance&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Claude Code: Legal and Compliance&lt;/a&gt;. Anthropic. Authentication and credential use restrictions.&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://platform.claude.com/docs/en/agent-sdk/overview&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Claude Agent SDK Overview&lt;/a&gt;. Anthropic. The SDK as the CLI&amp;rsquo;s internals extracted as a library.&lt;/li&gt;
&lt;/ol&gt;
&lt;div class=&#34;footnotes&#34; role=&#34;doc-endnotes&#34;&gt;
&lt;hr&gt;
&lt;ol&gt;
&lt;li id=&#34;fn:1&#34;&gt;
&lt;p&gt;Kotrotsos&amp;rsquo;s decompilation of Claude Desktop v1.1.4088 and v1.1.6452 revealed gVisor with 9P filesystem passthrough, minimal Linux boot images, and on-demand Ubuntu bundles. The harness cannot distinguish whether files came from a real partition or a 9P passthrough.&amp;#160;&lt;a href=&#34;#fnref:1&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
</description>
        </item>
        <item>
        <title>The Economic Anomaly of AI Agents</title>
        <link>https://pub.blocksec.ca/posts/the-economic-anomaly-of-ai-agents/</link>
        <pubDate>Sun, 22 Mar 2026 00:00:00 +0000</pubDate>
        
        <guid>https://pub.blocksec.ca/posts/the-economic-anomaly-of-ai-agents/</guid>
        <description>&lt;img src="https://pub.blocksec.ca/posts/the-economic-anomaly-of-ai-agents/cover.png" alt="Featured image of post The Economic Anomaly of AI Agents" /&gt;&lt;p&gt;I am not an economist. But Coase&amp;rsquo;s &amp;ldquo;&lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/The_Nature_of_the_Firm&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Nature of the Firm&lt;/a&gt;&amp;rdquo; has fascinated me for years, especially as outsourcing, SaaS, and cloud infrastructure have kept redrawing the line between what you build and what you buy. I have been building with AI agents long enough to notice that a contextualized agent, a Large Language Model (LLM) API layered with system prompts, repos, memory, and operational history, does not sit on either side of that line. The practical question is simple: can you source this thing from the market, or does it only exist if you build it? I went looking for the framework that answers that, and could not find one. What follows is my attempt to work through the economics I do know, show where each framework breaks down, and trace what the gap might mean.&lt;/p&gt;
&lt;h2 id=&#34;economic-foundations&#34;&gt;Economic Foundations
&lt;/h2&gt;&lt;p&gt;Several economic frameworks are relevant to understanding AI agents.&lt;sup id=&#34;fnref:1&#34;&gt;&lt;a href=&#34;#fn:1&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;1&lt;/a&gt;&lt;/sup&gt; Each captures part of the picture and none captures the whole.&lt;/p&gt;
&lt;h3 id=&#34;ronald-coase-and-transaction-cost-theory&#34;&gt;Ronald Coase and Transaction Cost Theory
&lt;/h3&gt;&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/Ronald_Coase&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Ronald Coase&lt;/a&gt;&amp;rsquo;s 1937 paper &amp;ldquo;The Nature of the Firm&amp;rdquo; asked a deceptively simple question: if markets are efficient, why do firms exist at all? His answer was &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/Transaction_cost&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;transaction costs&lt;/a&gt;. Using the market to coordinate work involves real friction: finding suppliers, negotiating terms, enforcing contracts, transferring knowledge. When those costs exceed the overhead of doing the work internally, you bring it inside the firm. The boundary of the firm sits where the marginal cost of internal coordination equals the marginal cost of market transactions.&lt;/p&gt;
&lt;p&gt;Coase&amp;rsquo;s framework emerged in a world of physical assets and early services. The things being transacted were rivalrous and excludable. If you hire a worker, that worker is unavailable to others. The entire logic of the firm boundary depends on scarcity.&lt;sup id=&#34;fnref:2&#34;&gt;&lt;a href=&#34;#fn:2&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
&lt;h3 id=&#34;oliver-williamson-and-asset-specificity&#34;&gt;Oliver Williamson and Asset Specificity
&lt;/h3&gt;&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/Oliver_E._Williamson&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Oliver Williamson&lt;/a&gt; extended Coase with the concept of &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/Asset_specificity&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;asset specificity&lt;/a&gt;. When an investment becomes valuable only in the context of a particular relationship, market transactions become dangerous because either party can exploit the dependency. A factory tooled for one buyer&amp;rsquo;s custom parts cannot easily serve other buyers. This lock-in effect strengthens the case for &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/Vertical_integration&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;vertical integration&lt;/a&gt;: bringing the specific asset inside the firm to protect against opportunistic exploitation.&lt;/p&gt;
&lt;p&gt;The key insight is that the more specialized an asset is, the more likely it will be governed within a firm rather than through market contracts.&lt;/p&gt;
&lt;h3 id=&#34;hal-varian-carl-shapiro-and-information-goods-economics&#34;&gt;Hal Varian, Carl Shapiro, and Information Goods Economics
&lt;/h3&gt;&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/Hal_Varian&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Hal Varian&lt;/a&gt; and &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/Carl_Shapiro&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Carl Shapiro&lt;/a&gt;&amp;rsquo;s work on information economics describes goods with high fixed costs and near-zero marginal costs. Creating the first copy of a piece of software, a book, or a database is expensive. Every subsequent copy is essentially free. This inverts classical pricing: the marginal cost pricing that works for physical goods would drive the price to zero. So &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/Information_good&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;information goods&lt;/a&gt; rely on differentiation, bundling, versioning, and lock-in.&lt;/p&gt;
&lt;p&gt;Their framework, laid out in &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/Information_Rules&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Information Rules&lt;/a&gt;, explains why software &amp;ldquo;eats everything&amp;rdquo;: once the fixed cost is paid, distribution is near-free, and the producer who captures the market first can amortize that fixed cost across millions of users.&lt;/p&gt;
&lt;h3 id=&#34;robert-solow-and-the-productivity-paradox&#34;&gt;Robert Solow and the Productivity Paradox
&lt;/h3&gt;&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/Robert_Solow&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Robert Solow&lt;/a&gt; observed in 1987 that &amp;ldquo;you can see the computer age everywhere but in the productivity statistics.&amp;rdquo; Computers were obviously transforming work, yet measured productivity remained flat. The &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/Solow_computer_paradox&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;paradox&lt;/a&gt; was not that computers did nothing, but that the measurement tools, designed for an industrial economy of countable physical outputs, could not capture what computers were actually doing. Output per worker-hour is clean when output is widgets. When output shifts to documents, decisions, coordination, and knowledge, the metric goes blind.&lt;/p&gt;
&lt;p&gt;The deeper issue: computers did not just accelerate existing work. They changed what work was. Spreadsheets made iteration free, enabling analysis that nobody would have attempted on paper. Email did not replace memos; it restructured decision-making. The productivity was not in the tool but in the organizational transformation it eventually forced, a transformation that took decades to manifest and even longer to measure.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&#34;the-economic-anomaly-of-the-ai-agent&#34;&gt;The Economic Anomaly of the AI Agent
&lt;/h2&gt;&lt;p&gt;An AI agent, understood as an LLM API contextualized with accumulated environment-specific knowledge (system prompts, repos, memory files, lessons, credential scopes, operational history), seems to be a new kind of economic entity. Each of its properties is individually known from other domains. The combination is genuinely novel.&lt;/p&gt;
&lt;h3 id=&#34;a-commodity-that-becomes-non-fungible-through-use&#34;&gt;A commodity that becomes non-fungible through use
&lt;/h3&gt;&lt;p&gt;The LLM API is a utility, identical for everyone, metered by the token, with minimal switching cost. It is electricity. But the moment context accumulates on top of it (a CLAUDE.md, a repo, weeks of lessons, environmental knowledge, credential scopes), the thing stops being a commodity. It is electricity running through a specific machine that was built over weeks and cannot be replicated from a catalog. The commodity is the input. The output is bespoke. There is no existing economic category for a commodity that becomes non-fungible through use.&lt;/p&gt;
&lt;h3 id=&#34;software-that-behaves-like-a-practice&#34;&gt;Software that behaves like a practice
&lt;/h3&gt;&lt;p&gt;Traditional software is a defined artifact. It can be versioned, shipped, installed, audited, hashed. An agent is a probability distribution narrowed by context, producing different behavior every run. It is not an executable but a trajectory through latent space shaped by accumulated artifacts. The artifacts themselves are software, yet the thing they produce when fed to the model is not, in any traditional sense. It is closer to a practice than a product: a body of knowledge in action, not a static binary.&lt;/p&gt;
&lt;h3 id=&#34;free-to-copy-costs-money-to-use&#34;&gt;Free to copy, costs money to use
&lt;/h3&gt;&lt;p&gt;The marginal cost of duplicating the agent&amp;rsquo;s knowledge (its context, repos, memory) is zero. The marginal cost of the agent doing anything is positive and metered per token. This is like duplicating a factory for free while still paying for electricity every time it runs. No physical good works this way: the capital cost is zero while the operating cost is nonzero. Every copy is simultaneously free and expensive depending on whether it is sitting or working.&lt;/p&gt;
&lt;h3 id=&#34;meta-software-substrate-medium-product-environment&#34;&gt;Meta-software: substrate, medium, product, environment
&lt;/h3&gt;&lt;p&gt;The agent does not just run as software; it also produces, reads, modifies, and reasons about software. Software is simultaneously the agent&amp;rsquo;s medium, its product, its environment, and its own substrate. A traditional SaaS product sits in a category: a tool that does a thing. An agent is a tool that makes tools, including potentially better versions of itself. There is no stable economic category for a good that produces more of the kind of thing it is.&lt;/p&gt;
&lt;h3 id=&#34;the-combined-anomaly&#34;&gt;The combined anomaly
&lt;/h3&gt;&lt;p&gt;These four properties together yield an entity that:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Runs on a commodity but is not one&lt;/li&gt;
&lt;li&gt;Looks like software but behaves like a practice&lt;/li&gt;
&lt;li&gt;Is free to copy but costs money to use&lt;/li&gt;
&lt;li&gt;Consumes, produces, and modifies its own substrate&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It is not a good, not a service, not capital, not labor. It has properties of all four simultaneously. Coase cannot model it because it is not a transaction. Williamson cannot model it because the asset specificity is real but the asset is infinitely duplicable. Varian cannot model it because it is not a static information good; it changes through use.&lt;/p&gt;
&lt;p&gt;In &lt;a class=&#34;link&#34; href=&#34;https://pub.blocksec.ca/posts/when-coase-met-turing/&#34; &gt;When Coase Met Turing&lt;/a&gt; [1], I built a reaction-diffusion visualization that lets you watch firm boundaries form from competing cost gradients. The digital rupture could be modeled there by pushing coordination range toward non-local. AI agents break the model entirely: the pattern starts modifying its own substrate.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://pub.blocksec.ca/posts/the-economic-anomaly-of-ai-agents/economic_anomaly_agent_layers.svg&#34; loading=&#34;lazy&#34; alt=&#34;The economic anomaly of the AI agent&#34;&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&#34;implications-and-projections&#34;&gt;Implications and Projections
&lt;/h2&gt;&lt;h3 id=&#34;the-coasean-firm-boundary-shifts&#34;&gt;The Coasean firm boundary shifts
&lt;/h3&gt;&lt;p&gt;If an agent accumulates institutional knowledge and that knowledge is duplicable at near-zero cost, the fixed cost of internal software production drops dramatically. The Cloudflare/Vinext case [2] (one engineer, one week, $1,100 in tokens to reimplement the Next.js API surface) is an early proof point. Work that was previously &amp;ldquo;buy&amp;rdquo; because the fixed cost was prohibitive becomes &amp;ldquo;build&amp;rdquo; because the fixed cost collapsed. The Coasean boundary moves inward. SaaS products that survive on the assumption that internal development is permanently expensive are running on a clock.&lt;sup id=&#34;fnref:3&#34;&gt;&lt;a href=&#34;#fn:3&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;3&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
&lt;p&gt;The shift is not one-time. The more you build internally with agents, the cheaper the next build becomes, because the agent&amp;rsquo;s accumulated knowledge carries forward. This is a positive feedback loop eroding the buy case over time.&lt;/p&gt;
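&lt;p&gt;The boundary shift reduces to toy arithmetic. The following sketch is hypothetical: only the $1,100 token figure comes from the Vinext case; the SaaS price and upkeep numbers are invented for illustration.&lt;/p&gt;

```python
# Hypothetical build-vs-buy break-even sketch. Only the $1,100 build cost
# comes from the Cloudflare/Vinext example [2]; saas_monthly and
# upkeep_monthly are invented illustrative figures, not data from the post.
def breakeven_months(saas_monthly: float, build_cost: float,
                     upkeep_monthly: float) -> float:
    """Months until a one-time agent build beats the SaaS subscription."""
    margin = saas_monthly - upkeep_monthly
    if margin <= 0:
        return float("inf")  # the subscription stays cheaper indefinitely
    return build_cost / margin

# A $2,000/month SaaS contract vs a $1,100 token build with $500/month
# upkeep pays back in under a month. If accumulated agent context halves
# the build cost on the next project, the buy case erodes further.
print(breakeven_months(2000, 1100, 500))
```

The point of the sketch is the feedback loop in the paragraph above: `build_cost` is the only term that shrinks with each internal build, so the break-even horizon keeps contracting.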
&lt;h3 id=&#34;asset-specificity-without-lock-in-cost&#34;&gt;Asset specificity without lock-in cost
&lt;/h3&gt;&lt;p&gt;Williamson&amp;rsquo;s framework assumes that acquiring a specific asset is expensive each time. When duplication is free, you get specificity (the agent&amp;rsquo;s knowledge is valuable only in your context) without the normal acquisition cost for additional instances. This breaks the trade-off Williamson described.&lt;sup id=&#34;fnref:4&#34;&gt;&lt;a href=&#34;#fn:4&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;4&lt;/a&gt;&lt;/sup&gt; You can have as many specialists as you want without the cost that normally constrains specialization.&lt;/p&gt;
&lt;p&gt;The binding constraint shifts from acquisition cost to the principal&amp;rsquo;s coordination capacity: the attention required to direct, evaluate, and integrate the work of multiple agents.&lt;sup id=&#34;fnref:5&#34;&gt;&lt;a href=&#34;#fn:5&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;5&lt;/a&gt;&lt;/sup&gt; This is Coase&amp;rsquo;s other prediction, that firm size is limited by the entrepreneur&amp;rsquo;s ability to coordinate, manifesting in a regime he never imagined.&lt;/p&gt;
&lt;h3 id=&#34;the-measurement-problem-recurs&#34;&gt;The measurement problem recurs
&lt;/h3&gt;&lt;p&gt;The Solow paradox is replaying.&lt;sup id=&#34;fnref:6&#34;&gt;&lt;a href=&#34;#fn:6&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;6&lt;/a&gt;&lt;/sup&gt; Enterprises deploy agents and measure throughput: tickets closed, hours saved, FTE equivalents.&lt;sup id=&#34;fnref:7&#34;&gt;&lt;a href=&#34;#fn:7&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;7&lt;/a&gt;&lt;/sup&gt; Those metrics capture the &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/Scientific_management&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Taylorist&lt;/a&gt; surface and miss everything underneath. An agent that accumulates institutional knowledge, makes subsequent builds cheaper, and enables duplication of specialists at zero marginal cost produces value that no existing productivity metric can express.&lt;/p&gt;
&lt;p&gt;The deeper value (compound knowledge, environmental specialization, heritable expertise) is invisible to the measurement tools. So it does not get funded, does not get studied, does not get built deliberately. It only emerges accidentally in setups where someone runs agents long enough and pays close enough attention to notice.&lt;/p&gt;
&lt;p&gt;What cannot be measured does not exist in an enterprise budget.&lt;sup id=&#34;fnref:8&#34;&gt;&lt;a href=&#34;#fn:8&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;8&lt;/a&gt;&lt;/sup&gt; This may be the largest obstacle to realizing the non-Taylorist potential of AI agents.&lt;/p&gt;
&lt;h3 id=&#34;capital-markets-cannot-price-the-contextualized-agent&#34;&gt;Capital markets cannot price the contextualized agent
&lt;/h3&gt;&lt;p&gt;Markets price by analogy. SaaS gets a revenue multiple. Headcount replacement gets an ROI calculation. Infrastructure gets a capex depreciation model. The LLM API layer can be priced because it is a metered service. But the contextualized agent, the thing that sits on top, has no analogue. It is not SaaS (not a defined product), not consulting (does not bill hours), not an employee (zero marginal duplication), not infrastructure (produces different things each time).&lt;/p&gt;
&lt;p&gt;Analysts default to whichever analogy their model supports: automation software, staff augmentation, developer tooling. Each captures a fraction and misses the rest. The inability to categorize leads to the inability to value, which leads to systematic underinvestment in the non-commodity layer where most of the actual value accumulates.&lt;/p&gt;
&lt;h3 id=&#34;early-days&#34;&gt;Early days
&lt;/h3&gt;&lt;p&gt;All of this is observation from a narrow window. The models are changing quarterly. The context mechanisms are changing faster. The institutional response has barely started. Every claim in this post has a shelf life, and some of them may look naive within a year.&lt;/p&gt;
&lt;p&gt;But the core anomaly, a commodity that becomes non-fungible through use and then modifies its own substrate, is structural. It does not depend on which model is frontier this quarter or which vendor wins the enterprise market. If the pattern holds, the economics will eventually catch up with a framework that handles it. Until then, the practitioners who are building these things are running ahead of the theory that is supposed to explain what they are doing.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&#34;references&#34;&gt;References
&lt;/h2&gt;&lt;ol&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://pub.blocksec.ca/posts/when-coase-met-turing/&#34; &gt;When Coase Met Turing&lt;/a&gt;, companion essay on economic morphogenesis and reaction-diffusion firm boundaries&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://blog.cloudflare.com/vinext/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Cloudflare/Vinext&lt;/a&gt;, one engineer reimplementing the Next.js API surface in a week for $1,100 in tokens&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://techcrunch.com/2026/03/01/saas-in-saas-out-heres-whats-driving-the-saaspocalypse/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;The SaaSpocalypse&lt;/a&gt; (TechCrunch, March 2026)&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://techcrunch.com/2026/02/24/anthropic-launches-new-push-for-enterprise-agents-with-plugins-for-finance-engineering-and-design/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Anthropic&amp;rsquo;s enterprise agents program&lt;/a&gt; (TechCrunch, February 2026), Cowork plug-ins for finance, legal, HR&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://www.nber.org/papers/w34836&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;AI Adoption and AI Exposure&lt;/a&gt; (NBER Working Paper 34836, February 2026), survey of ~6,000 executives across four countries&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://fortune.com/2026/02/17/ai-productivity-paradox-ceo-study-robert-solow-information-technology-age/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;The AI productivity paradox&lt;/a&gt; (Fortune, February 2026)&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://www.frbsf.org/research-and-insights/publications/economic-letter/2026/02/ai-moment-possibilities-productivity-policy/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;An AI Moment: Possibilities, Productivity, Policy&lt;/a&gt; (San Francisco Fed, February 2026)&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://www.faros.ai/blog/ai-software-engineering&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;The Impact of AI on Software Engineering&lt;/a&gt; (Faros AI), study across 10,000 developers&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://www.platformer.news/ai-productivity-paradox-metr-pwc-workday/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;The AI productivity paradox&lt;/a&gt; (Platformer), Workday&amp;rsquo;s 2026 findings on AI review overhead&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://www.man.com/insights/the-productivity-paradox&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;The Productivity Paradox&lt;/a&gt; (Man Group), historical precedent from electricity and computers&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://budgetlab.yale.edu/research/ai-productivity-boom-dont-count-your-productivity-data-chickens&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Don&amp;rsquo;t Count Your Productivity Data Chickens&lt;/a&gt; (Yale Budget Lab), measurement noise in current AI productivity data&lt;/li&gt;
&lt;/ol&gt;
&lt;div class=&#34;footnotes&#34; role=&#34;doc-endnotes&#34;&gt;
&lt;hr&gt;
&lt;ol&gt;
&lt;li id=&#34;fn:1&#34;&gt;
&lt;p&gt;Specifically, LLM-based AI agents with deep context encoding hard-to-source knowledge.&amp;#160;&lt;a href=&#34;#fnref:1&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:2&#34;&gt;
&lt;p&gt;Coase&amp;rsquo;s transaction cost logic and its implications for consulting and strategy mapping are threads I explore separately.&amp;#160;&lt;a href=&#34;#fnref:2&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:3&#34;&gt;
&lt;p&gt;This is tied to the so-called SaaSpocalypse [3].&amp;#160;&lt;a href=&#34;#fnref:3&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:4&#34;&gt;
&lt;p&gt;This holds for the fully contextualized agent built from scratch. However, a middle category is emerging: pre-contextualized agents that ship with domain-generic skills and are designed to accumulate firm-specific context through enterprise use. Anthropic&amp;rsquo;s enterprise agents program [4] (Cowork plug-ins for finance, legal, HR, with private marketplaces and controlled data flows) is the clearest example. These reintroduce a form of lock-in that operates not at the model layer but at the harness-to-context integration layer: the customizations a firm builds on a particular platform&amp;rsquo;s plug-in architecture are practically coupled to that harness, even if the underlying model remains swappable. Williamson&amp;rsquo;s asset specificity reasserts itself one level up the stack.&amp;#160;&lt;a href=&#34;#fnref:4&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:5&#34;&gt;
&lt;p&gt;Palantir has been monetizing this exact constraint for over a decade. Their Forward Deployed Engineers build ontologies, structured representations of a firm&amp;rsquo;s entities, relationships, and decision logic, on top of a commodity platform (Foundry, now AIP). The ontology is what converts the generic harness into a non-fungible institutional asset. This reveals a scope gradient in the commodity-to-bespoke transition: Anthropic&amp;rsquo;s Cowork plug-ins [4] operate at departmental scope, while Palantir&amp;rsquo;s ontologies encode cross-functional entity models and decision structures at institutional scope. The deeper the contextualization, the higher the lock-in and the wider the margin. The open question is whether departmental agents accumulate toward institutional-depth context organically through use (bottom-up), or whether institutional ontology must be deliberately engineered top-down, which is Palantir&amp;rsquo;s implicit bet.&amp;#160;&lt;a href=&#34;#fnref:5&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:6&#34;&gt;
&lt;p&gt;A February 2026 NBER study [5] surveying nearly 6,000 executives across the US, UK, Germany, and Australia found that roughly 90% of firms report no measurable impact from AI on employment or productivity over the past three years, despite 69% actively using it. Apollo chief economist Torsten Slok updated Solow&amp;rsquo;s line directly: &amp;ldquo;AI is everywhere except in the incoming macroeconomic data.&amp;rdquo; See also [6] and [7].&amp;#160;&lt;a href=&#34;#fnref:6&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:7&#34;&gt;
&lt;p&gt;The micro-macro disconnect is sharp. Faros AI [8] found that high-AI-adoption teams complete 21% more tasks and merge 98% more pull requests, but PR review time increases 91%. Individual speedup, no organizational throughput. Meanwhile, Workday&amp;rsquo;s 2026 research [9] found that 37-40% of time supposedly saved by AI gets consumed reviewing and correcting AI-generated output.&amp;#160;&lt;a href=&#34;#fnref:7&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:8&#34;&gt;
&lt;p&gt;Historical precedent from Man Group [10]: electricity required around 40 years to show aggregate productivity impact, computers took 25-35 years. The Yale Budget Lab [11] adds that current productivity data is noisy and may be measurement artifact, comparing revised jobs figures against unrevised GDP.&amp;#160;&lt;a href=&#34;#fnref:8&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
</description>
        </item>
        <item>
        <title>When Coase Met Turing</title>
        <link>https://pub.blocksec.ca/posts/when-coase-met-turing/</link>
        <pubDate>Sat, 21 Mar 2026 00:00:00 +0000</pubDate>
        
        <guid>https://pub.blocksec.ca/posts/when-coase-met-turing/</guid>
        <description>&lt;img src="https://pub.blocksec.ca/posts/when-coase-met-turing/cover.png" alt="Featured image of post When Coase Met Turing" /&gt;&lt;p&gt;In 1937, Ronald Coase asked a question that economics had somehow never asked: &lt;em&gt;&lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/The_Nature_of_the_Firm&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;why do firms exist at all?&lt;/a&gt;&lt;/em&gt; Why is the economy not one giant firm, or pure atomized exchange between individuals? His answer: there are two competing cost gradients. Coordination inside a firm gets cheaper per unit as you share context and routine. But organizational overhead (management, reporting, principal-agent friction) rises faster as the firm grows. The boundary of the firm is where these two gradients cross.&lt;/p&gt;
&lt;p&gt;In 1952, Alan Turing asked an equivalent question in biology: &lt;em&gt;&lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/Turing_pattern&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;why do spots form on animal skin?&lt;/a&gt;&lt;/em&gt; Why is the skin not uniformly one color? His answer: two chemicals interact. One reinforces itself locally (the activator). The other suppresses at a distance and diffuses faster (the inhibitor). When their diffusion rates differ enough, bounded patterns emerge spontaneously from a uniform substrate. No blueprint. No designer. Just two forces with different ranges.&lt;/p&gt;
&lt;p&gt;These are the same answer.&lt;/p&gt;
&lt;p&gt;I am not a mathematician or an economist. I am someone who thinks about organizations and noticed that Coase&amp;rsquo;s boundary and Turing&amp;rsquo;s boundary are drawn by the same mechanism. This essay is that observation, carried as far as intuition takes it.&lt;/p&gt;
&lt;h2 id=&#34;two-forces-one-pattern&#34;&gt;Two Forces, One Pattern
&lt;/h2&gt;&lt;p&gt;The mapping is direct.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Coordination gain&lt;/strong&gt; is Turing&amp;rsquo;s activator. When two activities are co-managed inside a firm, they share context, routines, tacit knowledge, and trust. This benefit reinforces locally: the more you coordinate, the more there is to coordinate. But it is inherently short-range. It depends on proximity, shared language, and working relationships that do not travel well.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Organizational overhead&lt;/strong&gt; is Turing&amp;rsquo;s inhibitor. Every additional activity inside a firm increases management burden. Meetings multiply. Reporting chains lengthen. Principal-agent friction compounds. And critically, this cost propagates faster and further than the coordination benefit. One new hire adds overhead for everyone in the reporting chain, not just the people they directly work with.&lt;/p&gt;
&lt;p&gt;When the inhibitor diffuses faster than the activator, bounded patterns emerge. In biology, those patterns are spots on a cheetah, stripes on a zebra, or patches on a giraffe. In economics, those patterns are &lt;strong&gt;firms&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;The boundary of the firm is not a decision someone makes. It is where coordination gain drops below the cost of organizational overhead, the edge of the Turing spot. The firm does not get designed. It crystallizes.&lt;/p&gt;
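&lt;p&gt;Turing&amp;rsquo;s mechanism has a standard mathematical statement. What follows is a textbook sketch of the linear-stability conditions for a two-species reaction-diffusion system, in generic notation (activator &lt;em&gt;u&lt;/em&gt;, inhibitor &lt;em&gt;w&lt;/em&gt;) that does not appear elsewhere in this essay:&lt;/p&gt;

```latex
% Reaction-diffusion pair: activator u (slow diffusion), inhibitor w (fast):
%   u_t = D_u \nabla^2 u + f(u, w), \qquad w_t = D_w \nabla^2 w + g(u, w)
% A diffusion-driven (Turing) instability of the uniform steady state requires:
\begin{align}
  f_u + g_w &< 0 \\
  f_u g_w - f_w g_u &> 0 \\
  D_w f_u + D_u g_w &> 0 \\
  (D_w f_u + D_u g_w)^2 &> 4\, D_u D_w\, (f_u g_w - f_w g_u)
\end{align}
% With local self-activation (f_u > 0) and self-inhibition (g_w < 0), the
% third condition forces D_w / D_u to be large: the inhibitor must diffuse
% faster than the activator, which is the condition the text describes.
```

In the economic reading, the first two conditions say the system is stable without diffusion (no firm forms from purely local interaction), and the last two say that faster-spreading overhead destabilizes uniformity into bounded spots.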
&lt;h2 id=&#34;the-skin&#34;&gt;The Skin
&lt;/h2&gt;&lt;p&gt;In Turing&amp;rsquo;s model, patterns form on a substrate, a field of cells with specific material properties that determine how fast each chemical can spread. Change the substrate, change the pattern.&lt;/p&gt;
&lt;p&gt;The economic substrate is everything through which coordination and overhead propagate: institutional infrastructure, economic conditions, and regulatory regimes. Money, contracts, accounting standards, corporate personhood, competition law, labor regulation, trade policy. These are not features of any individual firm. They are properties of the medium that all firms form on. This is the &amp;ldquo;skin&amp;rdquo; on which economic patterns form.&lt;/p&gt;
&lt;p&gt;And this skin has a developmental history:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Barter&lt;/strong&gt; is essentially no skin. Direct exchange, no mediating layer. No medium through which either coordination or overhead can propagate beyond the immediate transaction. No firms form.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Money&lt;/strong&gt; creates the first real skin. Value can flow. A surplus in one transaction can feed another. But it is thin: coordination travels a little further, but there is no infrastructure yet for overhead to propagate through. You get faint clustering: households, small workshops.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Contract law&lt;/strong&gt; thickens the skin. Agreements are enforceable across time and between strangers. Coordination range extends. But contracts simultaneously provide the first propagation channel for the inhibitor: obligations accumulate, disputes require adjudication, compliance requires monitoring. The Turing instability condition begins to be satisfied. Real firms nucleate.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Double-entry bookkeeping&lt;/strong&gt; is a phase change. Overhead suddenly has a high-fidelity, low-friction channel. You can track costs and performance across large organizational spans. The inhibitor&amp;rsquo;s diffusion rate jumps. This is why the modern firm emerges when accounting technology matures: the skin acquired the material properties needed to support larger, sharper spots.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Corporate personhood&lt;/strong&gt; makes the spot self-sustaining. Before it, firms dissolved when their principal died. After it, the pattern persists independently of any individual. The skin remembers the spot.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Regulatory regimes&lt;/strong&gt; are external constraints on the pattern itself. Antitrust caps spot size. Licensing sets a minimum activation threshold for spot nucleation. Labor and environmental regulation modify the inhibitor&amp;rsquo;s properties (they add overhead that scales with firm activity). Securities regulation changes how capital flows through the substrate.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Each institutional layer does not just enable new firms. It changes what kinds of patterns are possible at all.&lt;/p&gt;
&lt;p&gt;And the skin never gets thinner. It only accumulates layers. A modern economy cannot revert to bazaar morphology because the inhibitor channels are too well developed. De-patterning only happens through institutional collapse: the skin thins, firms dissolve, and the economy falls back to whatever pattern the remaining substrate supports. This is what you observe in failed states.&lt;/p&gt;
&lt;h2 id=&#34;technology-changes-the-skin&#34;&gt;Technology Changes the Skin
&lt;/h2&gt;&lt;p&gt;Technology is not the skin. It modifies the skin&amp;rsquo;s material properties. I use the term &lt;em&gt;lubrication&lt;/em&gt; for this&lt;sup id=&#34;fnref:1&#34;&gt;&lt;a href=&#34;#fn:1&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;1&lt;/a&gt;&lt;/sup&gt;: technology does not create the pattern. It changes how easily the forces move through the skin.&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;printing press&lt;/strong&gt; raised both diffusion rates, but not equally. Codified knowledge (accounting methods, legal codes, management procedure) benefits most from mass reproduction. That is inhibitor infrastructure. Coordination gain, which depends on tacit knowledge and trust, benefited less. Print sharpened firm boundaries and enabled larger spots.&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;telegraph and railroad&lt;/strong&gt; extended coordination range without proportionally increasing overhead propagation. A manager in New York could coordinate with Chicago in real time, but the bureaucratic cost of managing that distance did not shrink. Result: bigger spots, same morphology. This is the era of the giant vertically integrated firm.&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;shipping container&lt;/strong&gt; did something similar in the physical dimension. Cheaper transport extended the activator&amp;rsquo;s range. Firms stretched geographically. Multinational corporations became possible.&lt;/p&gt;
&lt;p&gt;Every one of these technologies modified diffusion rates but &lt;strong&gt;preserved locality&lt;/strong&gt;. Both forces still attenuated with distance. The governing equation stayed the same. Pattern scale changed. Pattern class did not. A cheetah with larger spots is still a cheetah.&lt;/p&gt;
&lt;h2 id=&#34;the-digital-rupture&#34;&gt;The Digital Rupture
&lt;/h2&gt;&lt;p&gt;Digital technology breaks locality itself.&lt;/p&gt;
&lt;p&gt;An API call from São Paulo to Dublin costs the same as one from the next building. For any activity that can be digitally mediated, coordination gain no longer attenuates with distance. The activator goes non-local.&lt;/p&gt;
&lt;p&gt;But overhead does not fully follow. Legal jurisdiction is still local. Labor regulation is still local. Management attention is still local: a human can only hold so many reporting relationships regardless of bandwidth. Cultural friction is still local. Time zones are still local.&lt;/p&gt;
&lt;p&gt;This is not &amp;ldquo;better technology.&amp;rdquo; It is a qualitatively different substrate. The activator is no longer governed by local diffusion. It reaches everywhere. The inhibitor is still partly local.&lt;/p&gt;
&lt;p&gt;And this decoupling produces fundamentally different pattern classes:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Plateaus with sharp boundaries.&lt;/strong&gt; Local Turing spots taper off smoothly. Non-local systems produce flat-topped territories with steep edges. Inside the platform: near-total dominance. Outside: near-zero presence. You are in the ecosystem or you are not. This is how platforms actually behave.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Winner-take-all condensation.&lt;/strong&gt; When activation reaches everywhere, the longest-wavelength mode dominates. The pattern condenses into one or very few global-scale spots. Google in search. The substrate cannot support multiple spots at that scale.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Spots with internal substructure.&lt;/strong&gt; In local Turing systems, spots are featureless blobs. In non-local systems, a large spot sustains its own internal pattern formation. Amazon is a spot that contains a marketplace, a ranking system, sub-ecosystems. Structure inside structure. Nested morphogenesis.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Bimodal size distribution.&lt;/strong&gt; Many small spots coexisting with very few enormous ones. Platforms plus micro-firms, with mid-size firms hollowing out. This is not an anomaly to be corrected. It is the expected morphology when one diffusion rate escapes locality and the other does not.&lt;/p&gt;
&lt;p&gt;The current economy is living on a skin that is part local and part non-local. Activities that can be digitally mediated live on non-local skin. Activities requiring physical presence still live on local skin. The boundary between those two zones is where the most interesting economic turbulence is happening right now.&lt;/p&gt;
&lt;h2 id=&#34;the-visualization&#34;&gt;The Visualization
&lt;/h2&gt;&lt;p&gt;I built an &lt;a class=&#34;link&#34; href=&#34;https://blocksecca.github.io/turing-coase/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;interactive visualization&lt;/a&gt; that runs a &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/Reaction%E2%80%93diffusion_system&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Gray-Scott reaction-diffusion model&lt;/a&gt;, the same class of system Turing described, with economic labels on the parameters. You can drag sliders and watch economic morphology change in real time.&lt;/p&gt;
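&lt;p&gt;The core update the sliders drive is compact. Below is a minimal Gray-Scott integrator, a sketch rather than the visualization&amp;rsquo;s own code; the economic glosses in the comments are one plausible correspondence to the slider labels, and the parameter values are standard spot-forming defaults:&lt;/p&gt;

```python
import numpy as np

def laplacian(Z):
    # five-point stencil with periodic (wrap-around) boundaries
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0)
            + np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

def gray_scott(steps=2000, n=128, Du=0.16, Dv=0.08, f=0.035, k=0.065):
    # Rough glosses: Du ~ overhead reach, Dv ~ coordination range,
    # f ~ new-opportunity rate, k ~ organizational decay.
    U = np.ones((n, n))           # substrate, replenished everywhere at rate f
    V = np.zeros((n, n))          # activator, initially absent
    m = n // 2
    V[m-5:m+5, m-5:m+5] = 0.5     # seed one small patch of activity
    U[m-5:m+5, m-5:m+5] = 0.25
    for _ in range(steps):
        UVV = U * V * V           # autocatalysis: V consumes U to make more V
        U += Du * laplacian(U) - UVV + f * (1 - U)
        V += Dv * laplacian(V) + UVV - (f + k) * V
    return U, V
```

&lt;p&gt;Nudging &lt;code&gt;f&lt;/code&gt;, &lt;code&gt;k&lt;/code&gt;, or the diffusion coefficients moves the system between empty substrate, spots, stripes, and labyrinths, which is exactly what the sliders do under different names.&lt;/p&gt;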
&lt;p&gt;&lt;strong&gt;Things to try:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Start with &amp;ldquo;Artisan.&amp;rdquo;&lt;/strong&gt; You see many small, well-separated spots, firms of similar size in a competitive market. Now slowly drag Coordination Range to the right. Watch the spots merge into larger structures. You are watching what the internet did to market structure.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Start with &amp;ldquo;Artisan&amp;rdquo; again.&lt;/strong&gt; This time, increase New Opportunity Rate. The clean spots connect into labyrinthine chains, into guilds.&lt;sup id=&#34;fnref:2&#34;&gt;&lt;a href=&#34;#fn:2&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;2&lt;/a&gt;&lt;/sup&gt; You just rediscovered the guild system from first principles.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Start with &amp;ldquo;Industrial.&amp;rdquo;&lt;/strong&gt; Push Organizational Decay toward &amp;ldquo;Fragile.&amp;rdquo; Watch firms dissolve as institutional memory fails.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Start with &amp;ldquo;Platform.&amp;rdquo;&lt;/strong&gt; Increase Overhead Reach toward &amp;ldquo;Pervasive.&amp;rdquo; Large territories fragment as regulation bites.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Start with &amp;ldquo;Barter.&amp;rdquo;&lt;/strong&gt; Increase New Opportunity Rate. Watch when firms first nucleate from the empty substrate. That is the Coasean instability threshold.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The parameter-space map on the right shows where you are and what regime transitions look like. The boundaries between pattern types are sharp: you do not smoothly interpolate between spots and stripes. You jump. This is why economic transitions are discontinuous.&lt;/p&gt;
&lt;p&gt;You can replay five centuries of economic history in four slider moves&lt;sup id=&#34;fnref:3&#34;&gt;&lt;a href=&#34;#fn:3&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;3&lt;/a&gt;&lt;/sup&gt;: start at Artisan (spots), increase opportunity (labyrinths/guilds), increase coordination range (monopoly condensation), then increase overhead reach (spots re-emerge at larger scale as modern firms).&lt;/p&gt;
&lt;h2 id=&#34;what-this-is-and-what-it-isnt&#34;&gt;What This Is and What It Isn&amp;rsquo;t
&lt;/h2&gt;&lt;p&gt;This is an observation of structural correspondence between two well-established formal systems. It is not a proof of isomorphism.&lt;/p&gt;
&lt;p&gt;A full formalization would require constructing an explicit activity space with a well-defined metric, deriving reaction kinetics from economic first principles, and proving that the Coasean boundary falls out as the zero-level set of the activator in the patterned steady state. That is a research program, not an essay.&lt;/p&gt;
&lt;p&gt;What the observation gives you without the formalization is a way of &lt;em&gt;seeing&lt;/em&gt; economic morphology as pattern formation. It reframes questions about firms, markets, platforms, and regulation in terms that make the underlying dynamics visible:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Antitrust&lt;/strong&gt; maps to two distinct interventions with different morphological outcomes.&lt;sup id=&#34;fnref:4&#34;&gt;&lt;a href=&#34;#fn:4&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;4&lt;/a&gt;&lt;/sup&gt; Breaking up firms (increasing decay rate) produces many small fragile spots that may recondense. Behavioral regulation (increasing inhibitor diffusion) changes the equilibrium itself.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Business cycles&lt;/strong&gt; look like metastable-state transitions.&lt;sup id=&#34;fnref:5&#34;&gt;&lt;a href=&#34;#fn:5&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;5&lt;/a&gt;&lt;/sup&gt; A boom is not firms getting bigger; it is spots connecting into labyrinths. A bust is the labyrinth fragmenting.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Guilds&lt;/strong&gt; are a distinct pattern class (labyrinths), not primitive firms. They emerge when opportunity is high but coordination is local.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;The mid-size firm hollowing out&lt;/strong&gt; is not a market failure. It is the expected morphology for a substrate with non-local activation and partly-local inhibition.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The math may follow. The insight does not need it.&lt;/p&gt;
&lt;h2 id=&#34;prior-work&#34;&gt;Prior Work
&lt;/h2&gt;&lt;p&gt;Nobody seems to have made exactly this mapping, but the pieces exist across several disconnected literatures. Each touches part of the structure without assembling the whole.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a class=&#34;link&#34; href=&#34;https://doi.org/10.1086/261763&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Krugman&amp;rsquo;s New Economic Geography&lt;/a&gt;&lt;/strong&gt; (1991 onward) is the closest body of work. His core-periphery model frames the spatial distribution of economic activity as a tug of war between agglomeration forces (market size effects, labor pooling) and spreading forces (immobile factors, transport costs). That is structurally a reaction-diffusion instability analysis, and Krugman himself was aware of the formal parallel. But he worked in a discrete two-region framework (core vs. periphery), not in continuous space, and he never mapped it to Turing morphogenesis. His activator is agglomeration; his inhibitor is dispersion. The math is close but the biological pattern-formation language is absent. Critically, his question is &amp;ldquo;given this landscape, where do spots form?&amp;rdquo; not &amp;ldquo;why do spots exist as a morphological class?&amp;rdquo; The answer is always contingent on substrate features: change the coastline, move the river, and the pattern shifts. This essay asks a question that is prior to geography.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a class=&#34;link&#34; href=&#34;https://doi.org/10.1140/epjb/e2009-00025-7&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Helbing (2009)&lt;/a&gt;&lt;/strong&gt; got closer to the explicit connection. He demonstrated that asymmetrical diffusion can drive social, economic, and biological systems into unstable regimes through pattern-formation instability, even from homogeneous initial conditions. His work frames this in terms of game-theoretic payoffs rather than firm boundaries, but the mechanism he identifies is Turing instability applied to social systems. He showed that you do not need pre-existing heterogeneity to get structure; the asymmetry in diffusion alone is sufficient. That is the same core claim made here, applied to a different level of economic organization.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a class=&#34;link&#34; href=&#34;https://doi.org/10.3390/math9040351&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Volpert, Petrovskii, and collaborators&lt;/a&gt;&lt;/strong&gt; produced what may be the most directly relevant formal work. They built an economic-demographic model using nonlocal reaction-diffusion PDEs, showing that when resource consumption is nonlocal, a homogeneous wealth-population distribution is replaced by periodic spatial patterns, and that for global consumption of resources, a single wealth accumulation center can emerge. That is exactly the non-local activation argument in this essay producing winner-take-all condensation. They explicitly noted that intellectual resources, unlike natural resources, do not have fixed geographical location and that their transportation cost is not a limiting factor, requiring nonlocal terms in the model. Their math validates the mechanism. What they did not do is connect it to Coase&amp;rsquo;s firm boundary question.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a class=&#34;link&#34; href=&#34;https://doi.org/10.1016/j.camwa.2019.07.017&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;A 2024 cross-diffusion paper&lt;/a&gt;&lt;/strong&gt; took another angle. Researchers built a mutualistic model of labor and capital interaction, showed that the uniform profit-optimum becomes unstable under Turing instability conditions, and that the resulting patterns of alternating high and low concentrations can be interpreted as cities. They connect this explicitly to Turing&amp;rsquo;s 1952 framework and to Krugman&amp;rsquo;s geography models, while noting that their continuous-space formulation is more general than Krugman&amp;rsquo;s discrete patches. Again, the level of analysis is spatial (where does economic activity concentrate?) rather than organizational (why do bounded firms exist?).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a class=&#34;link&#34; href=&#34;https://doi.org/10.1098/rsif.2021.0034&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;A 2021 Royal Society Interface paper&lt;/a&gt;&lt;/strong&gt; used a coupled economic-demographic reaction-diffusion model to show that population distributions exhibit nearly periodic spatial patterns even in uniform environments, and that Turing instability provides a plausible mechanism. This confirms that the pattern-formation framework applies to economic phenomena on featureless substrates, which is the same substrate assumption made here.&lt;/p&gt;
&lt;h3 id=&#34;what-is-different-here&#34;&gt;What is different here
&lt;/h3&gt;&lt;p&gt;All of these works use reaction-diffusion mathematics on economics. None of them connect it to Coase&amp;rsquo;s theory of the firm.&lt;/p&gt;
&lt;p&gt;Krugman explains where economic activity agglomerates in space. Volpert explains where wealth concentrates. The cross-diffusion paper explains where cities form. This essay asks a different question at a different level of abstraction: why do firms exist as bounded entities at all, and what determines their characteristic scale and morphology?&lt;/p&gt;
&lt;p&gt;The specific move is: the &amp;ldquo;spot&amp;rdquo; is the firm itself (not a city or a region), the boundary is the Coasean make-or-buy margin, and the substrate through which the morphogens propagate is institutional infrastructure (not geography). The activator is coordination gain. The inhibitor is organizational overhead. The lubrication parameter that governs pattern regime is technology&amp;rsquo;s effect on the substrate&amp;rsquo;s diffusion properties.&lt;/p&gt;
&lt;p&gt;The economics literature has reaction-diffusion models for spatial agglomeration. It has Coasean theory for firm boundaries. It has transaction cost economics for why those boundaries shift. The pieces are all on the table. The assembly, connecting the pattern-formation mathematics to the organizational boundary question, appears to be new.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;em&gt;I&amp;rsquo;m not an economist or a mathematician. I&amp;rsquo;m a pattern noticer. One day, thinking about firms as features on an economic &amp;ldquo;skin,&amp;rdquo; I realized that the simplest explanation for why a uniform substrate develops bounded structures is Turing&amp;rsquo;s, and that Coase had already given the economic version of the same answer, thirty-five years earlier.&lt;/em&gt;&lt;/p&gt;
&lt;div class=&#34;footnotes&#34; role=&#34;doc-endnotes&#34;&gt;
&lt;hr&gt;
&lt;ol&gt;
&lt;li id=&#34;fn:1&#34;&gt;
&lt;p&gt;Lubrication is not the diffusion of either species. It is a property of the substrate itself, the medium through which both activation and inhibition propagate. In Turing&amp;rsquo;s formulation, diffusion rates are not intrinsic to the chemicals alone; they depend on the medium (viscosity, porosity, temperature). Change the medium, change the pattern. What lubrication specifically does is increase the activator&amp;rsquo;s diffusion rate. Before technology intervenes, coordination gain is sticky and local: it depends on proximity, shared routine, tacit knowledge that does not travel well. The inhibitor (overhead, complexity) already diffuses easily because bureaucratic cost scales regardless of distance. That asymmetry is exactly the Turing instability condition. When you lubricate the substrate (APIs, cloud infrastructure, standardized contracts, outsourcing platforms), you make the activator less local. Coordination can now operate across boundaries that previously confined it. The diffusion ratio narrows, and existing spots become less stable. But total equalization of diffusion rates does not give you a featureless market with no firms. It gives you a regime transition to a different pattern type. The old spots dissolve, but new instability conditions emerge around different activator-inhibitor pairs (network effects versus platform governance costs, for instance), and those produce structure at a different characteristic scale. &amp;ldquo;Lubrication&amp;rdquo; names the substrate parameter that governs pattern morphology without specifying which pattern you get.&amp;#160;&lt;a href=&#34;#fnref:1&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:2&#34;&gt;
&lt;p&gt;A guild is not a firm and not a market. It is precisely the labyrinthine morphology: a continuous network of coordinated activity where individual practitioners are connected through shared standards, apprenticeship chains, quality controls, and mutual obligations, but without the clean inside-outside boundary that defines a firm. No single guild member is independent (they are bound by guild rules, pricing conventions, territorial agreements), but no single entity &amp;ldquo;owns&amp;rdquo; the guild. This is exactly the morphology you would predict for artisan-level coordination range (still local, still based on personal trust) but high opportunity rate. Lots of work to organize, limited ability to project coordination at distance. The system cannot form large discrete firms because the coordination range is too short. But it cannot remain as isolated spots either because opportunities are too abundant. So it connects the spots into worms and labyrinths. The historical sequence confirms this: guilds dominated precisely when trade was booming (late medieval commercial expansion) but coordination technology was still local (face-to-face, apprenticeship, personal reputation). When coordination range extended through print, contract law, and accounting, the labyrinths broke apart and re-formed as clean spots: chartered companies, joint-stock firms, modern corporations. The guild was not replaced because firms are &amp;ldquo;better.&amp;rdquo; The guild morphology became unstable when the substrate parameters shifted into a regime that supports spots instead of labyrinths.&amp;#160;&lt;a href=&#34;#fnref:2&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:3&#34;&gt;
&lt;p&gt;The full historical sequence maps to four parameter changes. Start at Artisan: many small, well-separated spots. This is the pre-industrial economy of comparable-scale workshops and traders. Increase opportunity rate: the spots connect into labyrinths. This is the late-medieval guild system, driven by expanding trade routes and commercial opportunity that outpaced coordination technology. Increase coordination range (simulating oceanic navigation, chartered companies, early banking networks): the labyrinths condense into one or a few dominant structures. This is the mercantilist era of monopoly trading companies like the East India Company, where coordination technology leaped past the existing inhibitor infrastructure. Finally, increase overhead reach (simulating the development of competition law, corporate regulation, standardized accounting): the monopoly fragments back into distinct bounded structures at a larger scale. This is the liberal economic revolution of the 18th and 19th centuries, not a political choice to have competition, but a pattern regime transition driven by the institutional skin finally developing enough inhibitor-propagation capacity to support a spotted morphology at the new coordination range. Four slider moves, five centuries.&amp;#160;&lt;a href=&#34;#fnref:3&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:4&#34;&gt;
&lt;p&gt;Structural antitrust (breaking up monopolies, preventing mergers, forcing divestitures) is an increase in Organizational Decay. You are artificially increasing the rate at which large organizational structures lose coherence. The Sherman Act, the Standard Oil breakup, the AT&amp;amp;T divestiture: these are all external forces that push the decay parameter upward for specific entities. The pattern effect is that oversized spots are forced to fragment. But notice what happens when you increase decay globally in the simulation: you do not just shrink the big spots. You make all firms more fragile. You cannot target the inhibitor at one spot without changing the substrate for everyone. Behavioral antitrust (regulating conduct, imposing interoperability requirements, mandating data portability, preventing exclusionary practices) is an increase in Overhead Reach. You are adding bureaucratic and compliance cost that propagates across the entire organization. GDPR, platform regulation, mandatory API access: these all increase the inhibitor&amp;rsquo;s diffusion. The pattern effect is different. You do not dissolve the large structure directly. You change the diffusion ratio so that the substrate can no longer support spots above a certain size. The large spots shrink or develop internal fractures because the inhibitor now reaches further within them. This is slower but more stable because you are changing the substrate properties rather than attacking individual spots. The framework surfaces something that antitrust policy debates usually miss: these two interventions produce different morphologies even when they both succeed in reducing concentration. Try both on the monopoly preset in the simulation and watch the difference.&amp;#160;&lt;a href=&#34;#fnref:4&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:5&#34;&gt;
&lt;p&gt;In reaction-diffusion systems, labyrinthine states are often metastable rather than truly stable. When the opportunity rate drops (the boom ends), the labyrinths do not gracefully separate back into spots. They fragment chaotically: suddenly interdependent firms discover their entanglements are liabilities, and the pattern collapses into a disorganized state before reorganizing into a new set of distinct spots at whatever scale the post-crash parameters support. This is what a bust looks like in morphological terms. The boom was not firms getting bigger. It was spots connecting under high opportunity pressure, forming tangled webs of partnerships, joint ventures, supply chain entanglement, and overlapping scopes where nobody can tell where one company ends and another begins. The organizational landscape becomes a labyrinth where coordinated activity is continuous and interconnected rather than discrete and bounded. And the framework predicts what comes next: the fragmentation is not a gradual unwinding but a sudden collapse, because labyrinthine states are sensitive to parameter shifts in a way that stable spots are not.&amp;#160;&lt;a href=&#34;#fnref:5&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
</description>
        </item>
        <item>
        <title>Can an LLM Find Design Flaws in Code It Can&#39;t Read?</title>
        <link>https://pub.blocksec.ca/posts/llm-cpg-design-analysis/</link>
        <pubDate>Thu, 19 Mar 2026 00:00:00 +0000</pubDate>
        
        <guid>https://pub.blocksec.ca/posts/llm-cpg-design-analysis/</guid>
        <description>&lt;img src="https://pub.blocksec.ca/posts/llm-cpg-design-analysis/cover.png" alt="Featured image of post Can an LLM Find Design Flaws in Code It Can&#39;t Read?" /&gt;&lt;p&gt;Security code review with Large Language Models (LLMs) has a structural problem that most people wave away: an LLM reads code one file at a time, but design flaws do not live in one file. They live in the relationships between components, and more often, in what is absent. The auth middleware that was never applied, the validation that does not exist, the rate limiting that covers four routes out of a hundred and nine.&lt;/p&gt;
&lt;p&gt;Pattern-matching tools like &lt;a class=&#34;link&#34; href=&#34;https://semgrep.dev/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Semgrep&lt;/a&gt; see the whole codebase but cannot express &amp;ldquo;find all routes that do NOT have auth middleware.&amp;rdquo; LLMs can reason about that question but cannot see the whole codebase. Both approaches find local bugs well enough, and neither finds design flaws.&lt;/p&gt;
&lt;p&gt;I wanted to know if there was a middle path: give the LLM a structural graph of the codebase that it can query, rather than source code to read. The graph would carry the cross-file relationships while the LLM provides the reasoning, and the question was whether the combination could surface design-level findings that neither tool reaches alone.&lt;/p&gt;
&lt;h2 id=&#34;the-graph-code-property-graphs&#34;&gt;The graph: Code Property Graphs
&lt;/h2&gt;&lt;p&gt;A &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/Code_property_graph&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Code Property Graph&lt;/a&gt; (CPG) overlays three views of the same codebase: the &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/Abstract_syntax_tree&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;abstract syntax tree&lt;/a&gt; (structure), the &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/Control-flow_graph&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;control flow graph&lt;/a&gt; (execution paths), and the &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/Program_dependence_graph&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;program dependence graph&lt;/a&gt; (data flow). &lt;a class=&#34;link&#34; href=&#34;https://joern.io&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Joern&lt;/a&gt;, the open-source CPG engine, builds all three into a single queryable graph and exposes a Scala-based query language called CPGQL.&lt;/p&gt;
&lt;p&gt;For the target application, an Express/TypeScript app with custom JWT auth and no input validation, Joern produced a graph of 443,000 nodes. The LLM never touched those nodes directly; instead it formulated CPGQL queries and worked with the small, structured results that came back.&lt;/p&gt;
&lt;h2 id=&#34;two-passes-learn-the-architecture-then-interrogate-the-design&#34;&gt;Two passes: learn the architecture, then interrogate the design
&lt;/h2&gt;&lt;p&gt;The approach has two phases. In Pass 1, the LLM knows only the framework name (&amp;ldquo;this is an Express app with Sequelize&amp;rdquo;) and asks nine generic structural questions: how many routes? what middleware exists? is there input validation? what database operations are exposed? These questions require no knowledge of the specific application, and the answers produce a structural profile of the codebase.&lt;/p&gt;
&lt;p&gt;In Pass 2, the LLM reads the Pass 1 results (still not source code) and writes targeted queries for eight CWE categories under &lt;a class=&#34;link&#34; href=&#34;https://owasp.org/Top10/2025/A06_2025-Insecure_Design/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;OWASP A06: Insecure Design&lt;/a&gt;. This is where it gets interesting, because the LLM can now ask questions like &amp;ldquo;which routes lack any security middleware&amp;rdquo; using negative sub-traversals, a query pattern that filters for absence rather than presence. Pattern matchers cannot express this, and LLMs reading source files cannot hold the full route inventory in context. The CPG makes it tractable.&lt;/p&gt;
&lt;h2 id=&#34;what-i-found&#34;&gt;What I found
&lt;/h2&gt;&lt;p&gt;The analysis produced thirty confirmed findings across eight &lt;a class=&#34;link&#34; href=&#34;https://cwe.mitre.org/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;CWE&lt;/a&gt; categories, with zero false positives when validated against the &lt;a class=&#34;link&#34; href=&#34;https://github.com/BlockSecCA/vulnerable-app&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;target application&amp;rsquo;s&lt;/a&gt; 107 documented security challenges. Some highlights:&lt;/p&gt;
&lt;p&gt;Fifty-eight of 109 routes (53%) had no authentication middleware at all, including password change endpoints, admin panels, and every file upload route. The finding came from a single negative sub-traversal: enumerate all route handlers, filter out those with a &lt;code&gt;security.*&lt;/code&gt; argument, report the rest.&lt;/p&gt;
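&lt;p&gt;The shape of that query is worth pausing on, because filtering for absence is the whole trick. Outside CPGQL, the same logic reduces to a set difference over a route inventory. The routes and middleware names below are illustrative placeholders, not the target app&amp;rsquo;s:&lt;/p&gt;

```python
# Toy model of a negative sub-traversal: report routes that carry NO
# middleware from the "security" namespace. All names are hypothetical.
routes = {
    "/login": ["security.rateLimit"],
    "/profile/password": [],           # password change with no auth at all
    "/admin": ["logging.audit"],       # has middleware, none security-related
    "/upload": [],
}

def unprotected(route_table):
    # keep a route when none of its middleware names start with "security."
    return sorted(
        path for path, mws in route_table.items()
        if not any(mw.startswith("security.") for mw in mws)
    )

print(unprotected(routes))  # ['/admin', '/profile/password', '/upload']
```

&lt;p&gt;In CPGQL the absence filter is expressed as a negative sub-traversal (Joern&amp;rsquo;s &lt;code&gt;whereNot&lt;/code&gt; step) over the route-registration calls, which is what makes the question answerable in one query rather than a hundred greps.&lt;/p&gt;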
&lt;p&gt;The crypto implementation used MD5 with no salt for password hashing, a hardcoded HMAC key visible in source, and plaintext storage for TOTP secrets and credit card numbers. Joern&amp;rsquo;s &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/Taint_checking&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;taint analysis&lt;/a&gt; traced data flows from user input through to storage to confirm the findings.&lt;/p&gt;
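&lt;p&gt;The &amp;ldquo;no salt&amp;rdquo; part is the design flaw, not merely the choice of MD5. An unsalted hash is deterministic, so identical passwords collide across accounts and precomputed tables crack them wholesale. A short illustration, not the target app&amp;rsquo;s code:&lt;/p&gt;

```python
import hashlib

def weak_hash(password):
    # unsalted MD5: deterministic, so equal inputs collide across accounts
    return hashlib.md5(password.encode()).hexdigest()

# two different users, same password: indistinguishable in the database,
# and directly attackable with precomputed (rainbow) tables
alice = weak_hash("hunter2")
bob = weak_hash("hunter2")
print(alice == bob)  # True
```

&lt;p&gt;A per-user random salt breaks exactly this property, which is why its absence is a finding about design rather than about algorithm choice.&lt;/p&gt;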
&lt;p&gt;Four file upload routes had validators that existed but did nothing: &lt;code&gt;checkFileType&lt;/code&gt; and &lt;code&gt;checkUploadSize&lt;/code&gt; both called &lt;code&gt;next()&lt;/code&gt; unconditionally. The validators were present in the code, so a quick grep would see &amp;ldquo;validation exists,&amp;rdquo; but the CPG revealed that the validation functions never rejected anything.&lt;/p&gt;
&lt;p&gt;The full results, including all 20 CPGQL queries and their outputs, are documented in the &lt;a class=&#34;link&#34; href=&#34;https://github.com/BlockSecCA/llm-cpg-exploration&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;source repo&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;what-did-not-work&#34;&gt;What did not work
&lt;/h2&gt;&lt;p&gt;The approach has real limits. Joern&amp;rsquo;s JavaScript/TypeScript frontend is less mature than its Java and C support, and some queries that should have been straightforward required workarounds. GitNexus, a complementary code knowledge graph tool, contributed useful architectural context on eight of the twenty questions but could not match Joern&amp;rsquo;s expression-level precision for the CWE-targeted queries.&lt;/p&gt;
&lt;p&gt;More fundamentally, the two-pass method still requires a human to decide which CWE categories to target and to evaluate whether the query results constitute genuine findings. The LLM formulated the CPGQL queries and reasoned about the results, but the analytical framework came from outside. This is closer to &amp;ldquo;LLM-assisted analysis&amp;rdquo; than &amp;ldquo;automated vulnerability detection,&amp;rdquo; and I think that distinction matters.&lt;/p&gt;
&lt;h2 id=&#34;where-this-goes&#34;&gt;Where this goes
&lt;/h2&gt;&lt;p&gt;This was an exploration, not a finished tool, and there are open questions worth pursuing. The immediate one is whether the two-pass pattern generalizes beyond Express to other frameworks and languages where Joern has stronger frontend support (Java, C/C++). The longer-term question is whether the structural query step can be made more autonomous, with the LLM selecting which CWE categories to investigate based on the Pass 1 profile rather than being told.&lt;/p&gt;
&lt;p&gt;I have some ideas on both fronts. More to come when I get to it.&lt;/p&gt;
&lt;h2 id=&#34;resources&#34;&gt;Resources
&lt;/h2&gt;&lt;p&gt;The full interactive presentation walks through the approach slide by slide, including the CPG layer visualizations, CPGQL query examples, and the complete findings table: &lt;strong&gt;&lt;a class=&#34;link&#34; href=&#34;https://cpg.blocksec.ca/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;cpg.blocksec.ca&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The source repo contains all documentation, from initial tool evaluation through final validation, organized in reading order: &lt;strong&gt;&lt;a class=&#34;link&#34; href=&#34;https://github.com/BlockSecCA/llm-cpg-exploration&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;BlockSecCA/llm-cpg-exploration&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Tools and references from the project:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://joern.io&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Joern&lt;/a&gt;, the open-source CPG engine used for all structural queries&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://github.com/BlockSecCA/joern-mcp&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;joern-mcp&lt;/a&gt;, the MCP server that wraps Joern&amp;rsquo;s HTTP API and made it possible for the LLM to call queries as tool invocations&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://github.com/BlockSecCA/vulnerable-app&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;vulnerable-app&lt;/a&gt;, the target application: a debranded fork of an intentionally vulnerable Express app with 107 documented challenges, stripped of identifying markers to prevent LLM training data leakage during analysis&lt;/li&gt;
&lt;li&gt;Yamaguchi, Golde, Arp, and Rieck, &lt;a class=&#34;link&#34; href=&#34;https://ieeexplore.ieee.org/document/6956589&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Modeling and Discovering Vulnerabilities with Code Property Graphs&lt;/a&gt; (2014), the original paper that introduced the CPG concept&lt;/li&gt;
&lt;li&gt;Lekssays et al., &lt;a class=&#34;link&#34; href=&#34;https://arxiv.org/abs/2507.16585&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;LLMxCPG: A Framework for LLM-Driven Code Vulnerability Detection using Code Property Graphs&lt;/a&gt; (2025), which demonstrated that LLMs can learn CPGQL but left architectural discovery and negative queries unexplored&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://owasp.org/Top10/2025/A06_2025-Insecure_Design/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;OWASP A06:2025 Insecure Design&lt;/a&gt;, the category that framed the entire analysis&lt;/li&gt;
&lt;/ul&gt;
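&lt;p&gt;For readers who have not worked with Joern, the queries these references describe look roughly like this in the Joern shell. This is a hedged sketch: the node steps are standard CPGQL, but the &lt;code&gt;exec&lt;/code&gt; and &lt;code&gt;authenticate&lt;/code&gt; identifiers are illustrative, not taken from the project:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// structural query: list every call site named exec
cpg.call.name(&#34;exec&#34;).l

// dataflow query: can any method parameter reach an argument of exec?
val source = cpg.method.parameter
val sink   = cpg.call.name(&#34;exec&#34;).argument
sink.reachableBy(source).l

// negative query: handler methods that never call an authentication routine
cpg.method.name(&#34;.*[Hh]andler.*&#34;).whereNot(_.call.name(&#34;authenticate&#34;)).l
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The last form is the &amp;ldquo;negative query&amp;rdquo; pattern mentioned above: asking what is absent from the graph rather than what is present.&lt;/p&gt;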
</description>
        </item>
        <item>
        <title>AI and the Digital Garden</title>
        <link>https://pub.blocksec.ca/posts/ai-and-the-digital-garden/</link>
        <pubDate>Wed, 18 Mar 2026 00:00:00 +0000</pubDate>
        
        <guid>https://pub.blocksec.ca/posts/ai-and-the-digital-garden/</guid>
        <description>&lt;img src="https://pub.blocksec.ca/posts/ai-and-the-digital-garden/cover.png" alt="Featured image of post AI and the Digital Garden" /&gt;&lt;p&gt;I keep a digital garden. It is a space where I think out loud, connect ideas over time, and publish things I have actually worked through. That practice is now caught in the middle of three forces that generative AI has set in motion: it makes writing better, it floods the web with generated content, and it feeds on the very gardens it degrades.&lt;/p&gt;
&lt;h2 id=&#34;the-upside-writing-with-an-llm&#34;&gt;The Upside: Writing with an LLM
&lt;/h2&gt;&lt;p&gt;The most immediate benefit is collaboration. An LLM can act as a sounding board for brainstorming, push back on weak arguments, and help tighten prose. Jorge Arango described this as the &amp;ldquo;amanuensis&amp;rdquo; role&lt;sup id=&#34;fnref:1&#34;&gt;&lt;a href=&#34;#fn:1&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;1&lt;/a&gt;&lt;/sup&gt;, where the machine handles mechanical work while the human steers the thinking. LLMs can also assist with research by performing search and collating results across sources.&lt;/p&gt;
&lt;p&gt;This is the part that works well, and it is the reason I use LLMs in my own writing process. The key is that the human remains the author. The LLM accelerates; it does not originate.&lt;/p&gt;
&lt;h2 id=&#34;the-flood-generated-content-at-scale&#34;&gt;The Flood: Generated Content at Scale
&lt;/h2&gt;&lt;p&gt;The problem starts when that acceleration is pointed at volume instead of quality. A single person with an LLM can produce more content in a day than they could write in a month. Multiply that across millions of users and you get what some have started calling the &amp;ldquo;Slopocene&amp;rdquo;&lt;sup id=&#34;fnref:2&#34;&gt;&lt;a href=&#34;#fn:2&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;2&lt;/a&gt;&lt;/sup&gt;, a web increasingly dominated by low-effort generated text that is hard to distinguish from human writing.&lt;/p&gt;
&lt;p&gt;The downstream effects compound. Search engines surface generated results alongside human ones with no reliable way to tell them apart. False information scales faster than correction. Feedback and discussion on posts become shallow when bots participate alongside people. The overall signal-to-noise ratio drops, and with it, the trust that makes digital gardens worth reading in the first place.&lt;/p&gt;
&lt;p&gt;Maggie Appleton has written extensively about this trajectory. She describes the result as a &amp;ldquo;Dark Forest&amp;rdquo;&lt;sup id=&#34;fnref:3&#34;&gt;&lt;a href=&#34;#fn:3&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;3&lt;/a&gt;&lt;/sup&gt;, borrowing from the Fermi Paradox solution where anything that makes itself visible gets consumed. In the digital version, quality public content attracts bot attention, scraping, and imitation, which pushes genuine creators toward smaller, private spaces. Her illustrations of this dynamic are worth seeing in the original article&lt;sup id=&#34;fnref:4&#34;&gt;&lt;a href=&#34;#fn:4&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;4&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;There is a recursive problem here too. When generated content gets harvested and fed back into model training, model performance degrades over successive generations. Researchers call this Model Autophagy Disorder (MAD), and it means the flood is not just a content problem but a training data problem that makes future models worse.&lt;/p&gt;
&lt;h2 id=&#34;the-harvest-data-as-raw-material&#34;&gt;The Harvest: Data as Raw Material
&lt;/h2&gt;&lt;p&gt;Using the data commons for private profit is not new. Generative AI has intensified the practice because training requires enormous volumes of text, and digital gardens, with their human-curated, publicly accessible content, are prime targets.&lt;/p&gt;
&lt;p&gt;Lars Doucet mapped out where this leads&lt;sup id=&#34;fnref:5&#34;&gt;&lt;a href=&#34;#fn:5&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;5&lt;/a&gt;&lt;/sup&gt;. First, a &amp;ldquo;Market for Lemons&amp;rdquo; dynamic takes hold in the digital world: when spam and generated content become indistinguishable from genuine work, the perceived quality of everything drops. Second, a &amp;ldquo;Great Logging Off&amp;rdquo; follows as creators of all types retreat from the public internet. Third, the remaining communities wall themselves off into private spaces to prevent bot scraping&lt;sup id=&#34;fnref:6&#34;&gt;&lt;a href=&#34;#fn:6&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;6&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;The effect is something like a bottom trawler that destroys the ecosystem while gathering its catch. Digital gardens could disappear as a public good, not because people stop writing, but because the open web becomes too hostile to sustain them.&lt;/p&gt;
&lt;h2 id=&#34;the-tensions-that-hold&#34;&gt;The Tensions That Hold
&lt;/h2&gt;&lt;p&gt;These forces create a set of interlocking conflicts that have no clean resolution yet.&lt;/p&gt;
&lt;p&gt;On the data side, training new models requires vast amounts of high-quality human text. That text has to come from somewhere, and the economics incentivize model makers to avoid paying fair market rates for it&lt;sup id=&#34;fnref:7&#34;&gt;&lt;a href=&#34;#fn:7&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;7&lt;/a&gt;&lt;/sup&gt;. Content creators and model makers end up in direct opposition, and it does not help that the industry&amp;rsquo;s early data acquisition practices were liberal, to put it charitably. The same training data is needed to address the risks created by generated content and to keep models current, so the demand is not going away.&lt;/p&gt;
&lt;p&gt;On the content side, generation is extraordinarily cheap compared to human authorship. That cost advantage is attractive to many business models, but the resulting abundance displaces human work and erodes authenticity. The generated data cycles back into training corpora, contributing to model collapse and content homogenization. The very cheapness that makes generated content attractive undermines the quality that gave it value.&lt;/p&gt;
&lt;p&gt;As of this writing, the conflicts are playing out through lawsuits and regulatory proposals, but nothing has resolved the fundamental tension between the demand for training data and the rights of the people who created it.&lt;/p&gt;
&lt;h2 id=&#34;what-this-means-for-this-site&#34;&gt;What This Means for This Site
&lt;/h2&gt;&lt;p&gt;I publish here because I want to think in public and have those thoughts be findable by other practitioners. Every tension described above pushes against that goal. The flood makes it harder to be found. The harvest means anything I publish becomes training material. The erosion of trust means even genuine writing gets viewed with suspicion.&lt;/p&gt;
&lt;p&gt;I do not have a solution to any of that. What I do have is a practice: write things I have actually worked through, show the reasoning, attribute properly, and publish under my own name on a domain I control. That is a small bet that authenticity still matters, even in a landscape that is actively selecting against it.&lt;/p&gt;
&lt;h2 id=&#34;footnotes&#34;&gt;Footnotes
&lt;/h2&gt;&lt;div class=&#34;footnotes&#34; role=&#34;doc-endnotes&#34;&gt;
&lt;hr&gt;
&lt;ol&gt;
&lt;li id=&#34;fn:1&#34;&gt;
&lt;p&gt;The amanuensis role is described by Jorge Arango in &lt;a class=&#34;link&#34; href=&#34;https://jarango.com/2022/12/18/three-roles-for-robots/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Three Roles for Robots&lt;/a&gt;.&amp;#160;&lt;a href=&#34;#fnref:1&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:2&#34;&gt;
&lt;p&gt;The term captures the state of a web increasingly flooded with low-quality generated content. It fits better than &amp;ldquo;Generated Web&amp;rdquo; because it emphasizes the quality problem, not just the origin.&amp;#160;&lt;a href=&#34;#fnref:2&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:3&#34;&gt;
&lt;p&gt;Described in Maggie Appleton&amp;rsquo;s &lt;a class=&#34;link&#34; href=&#34;https://maggieappleton.com/ai-dark-forest&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;The Expanding Dark Forest and Generative AI&lt;/a&gt;. The Dark Forest term originally refers to a solution to the Fermi Paradox where intelligent life that announces itself is soon extinguished. In the digital version, quality public content that attracts attention gets consumed by bots.&amp;#160;&lt;a href=&#34;#fnref:3&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:4&#34;&gt;
&lt;p&gt;See Appleton&amp;rsquo;s article and her &lt;a class=&#34;link&#34; href=&#34;https://theinformed.life/2023/07/16/episode-118-maggie-appleton/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;conversation on The Informed Life&lt;/a&gt; for the full visual treatment.&amp;#160;&lt;a href=&#34;#fnref:4&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:5&#34;&gt;
&lt;p&gt;See Lars Doucet, &lt;a class=&#34;link&#34; href=&#34;https://www.fortressofdoors.com/ai-markets-for-lemons-and-the-great-logging-off/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;AI: Markets for Lemons, and the Great Logging Off&lt;/a&gt;.&amp;#160;&lt;a href=&#34;#fnref:5&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:6&#34;&gt;
&lt;p&gt;Cloudflare has created anti-bot scraping measures such as redirecting bots into mazes from which they never escape, partly because many bots ignore the robots.txt convention.&amp;#160;&lt;a href=&#34;#fnref:6&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:7&#34;&gt;
&lt;p&gt;This feeds the &amp;ldquo;Data is the New Oil&amp;rdquo; framing. I am not sold on the analogy as a whole, but the demand side is real.&amp;#160;&lt;a href=&#34;#fnref:7&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
</description>
        </item>
        <item>
        <title>Colophon</title>
        <link>https://pub.blocksec.ca/page/colophon/</link>
        <pubDate>Wed, 18 Mar 2026 00:00:00 +0000</pubDate>
        
        <guid>https://pub.blocksec.ca/page/colophon/</guid>
        <description>&lt;p&gt;A colophon describes how something is made. Here is how this site works.&lt;/p&gt;
&lt;h2 id=&#34;generator-and-theme&#34;&gt;Generator and Theme
&lt;/h2&gt;&lt;p&gt;The site is built with &lt;a class=&#34;link&#34; href=&#34;https://gohugo.io/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Hugo&lt;/a&gt; using the &lt;a class=&#34;link&#34; href=&#34;https://github.com/CaiJimmy/hugo-theme-stack&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Stack&lt;/a&gt; theme, pinned to v3.34.2. Hugo is a static site generator, which means every page is pre-rendered HTML with no server-side processing, no database, and no JavaScript frameworks. The theme handles layout, typography, and dark mode. I have not modified it beyond a footer override and a couple of custom layouts for interactive content.&lt;/p&gt;
&lt;h2 id=&#34;hosting&#34;&gt;Hosting
&lt;/h2&gt;&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;https://pages.cloudflare.com/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Cloudflare Pages&lt;/a&gt; builds and serves the site. A push to the &lt;code&gt;main&lt;/code&gt; branch triggers a build. Drafts are excluded, so nothing reaches the public site until I explicitly flip a post out of draft status. DNS, CDN, and anti-bot protection are also handled by Cloudflare, which is relevant given the subject matter of some posts here.&lt;/p&gt;
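&lt;p&gt;The draft gate is Hugo&amp;rsquo;s standard front-matter flag. A minimal sketch of what a held-back post looks like (title and date hypothetical):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;---
title: Example Post
date: 2026-03-18
draft: true   # omitted from builds unless --buildDrafts is passed
---
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Running &lt;code&gt;hugo server -D&lt;/code&gt; previews drafts locally; the production build runs without the flag, so drafts never ship.&lt;/p&gt;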
&lt;h2 id=&#34;writing&#34;&gt;Writing
&lt;/h2&gt;&lt;p&gt;Most content starts as research notes in an &lt;a class=&#34;link&#34; href=&#34;https://obsidian.md/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Obsidian&lt;/a&gt; vault. The vault is my thinking space; this site is where the thinking that held up gets published. The two are connected through Git, but the vault contains far more than what reaches the site. Posts go through several rounds of refinement before they leave draft status.&lt;/p&gt;
&lt;p&gt;I use LLMs as writing collaborators, primarily &lt;a class=&#34;link&#34; href=&#34;https://claude.ai/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Claude&lt;/a&gt;. The collaboration is structural: brainstorming, critiquing arguments, tightening prose, and handling file operations through tool integrations. The ideas and editorial judgment are mine. The LLM accelerates the mechanical work, which is the arrangement I described in the &lt;a class=&#34;link&#34; href=&#34;https://pub.blocksec.ca/posts/ai-and-the-digital-garden/&#34; &gt;AI and the Digital Garden&lt;/a&gt; post as the amanuensis role.&lt;/p&gt;
&lt;h2 id=&#34;source&#34;&gt;Source
&lt;/h2&gt;&lt;p&gt;The site source is in a private GitHub repository.&lt;/p&gt;
&lt;h2 id=&#34;content-license&#34;&gt;Content License
&lt;/h2&gt;&lt;p&gt;Unless otherwise noted, the written content on this site is licensed under &lt;a class=&#34;link&#34; href=&#34;https://creativecommons.org/licenses/by-nc/4.0/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Creative Commons Attribution-NonCommercial 4.0 International&lt;/a&gt; (CC BY-NC 4.0). You are free to share and adapt the content for non-commercial purposes, provided you give appropriate attribution and link back to the original.&lt;/p&gt;
&lt;p&gt;Code snippets and configuration examples are provided as-is with no license restriction. Use them however you like.&lt;/p&gt;
&lt;p&gt;If you want to use anything here in a way that falls outside these terms, reach out and ask. I am generally agreeable.&lt;/p&gt;
</description>
        </item>
        <item>
        <title>About</title>
        <link>https://pub.blocksec.ca/page/about/</link>
        <pubDate>Thu, 05 Mar 2026 00:00:00 +0000</pubDate>
        
        <guid>https://pub.blocksec.ca/page/about/</guid>
        <description>&lt;p&gt;I&amp;rsquo;m a security architect by trade, but what I actually do is think about how systems hold together and come apart. Security happens to be a good vantage point for that. You learn a lot about structure by studying where it fails.&lt;/p&gt;
&lt;p&gt;My professional work sits at the intersection of cybersecurity, AI security, enterprise architecture, and risk. But the thinking behind it pulls from further out: mechanism design, semantic layering, topological models of how cloud environments actually behave versus how we diagram them. I&amp;rsquo;m drawn to the structural questions underneath the technical ones. What makes a domain boundary real. Why certain abstractions survive contact with production and others collapse. How agency works (and doesn&amp;rsquo;t) in systems that claim to have it.&lt;/p&gt;
&lt;p&gt;AI is a recurring subject here, but not in the &amp;ldquo;here&amp;rsquo;s how to prompt better&amp;rdquo; sense. I&amp;rsquo;m interested in what happens when you stack an LLM inside a framework inside a platform and hand it tools. Where the failure modes live in that stack, and what the security and trust implications look like when you can&amp;rsquo;t fully characterize any single layer, let alone the composition.&lt;/p&gt;
&lt;p&gt;Most of what I write starts as research notes in a personal knowledge vault. The posts that reach this site are the ones where the thinking held up under pressure: a model that clarified something, a breakdown worth documenting, or a connection between domains that turned out to matter. I try to show the reasoning rather than just the conclusion, so you can decide for yourself whether it holds.&lt;/p&gt;
&lt;p&gt;I don&amp;rsquo;t write tutorials. If you build systems, think about risk, or find yourself skeptical of whichever framework is trending this quarter, some of this might be useful.&lt;/p&gt;
</description>
        </item>
        <item>
        <title>Archives</title>
        <link>https://pub.blocksec.ca/page/archives/</link>
        <pubDate>Thu, 05 Mar 2026 00:00:00 +0000</pubDate>
        
        <guid>https://pub.blocksec.ca/page/archives/</guid>
        <description></description>
        </item>
        <item>
        <title>Search</title>
        <link>https://pub.blocksec.ca/page/search/</link>
        <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
        
        <guid>https://pub.blocksec.ca/page/search/</guid>
        <description></description>
        </item>
        
    </channel>
</rss>
