I’m a security architect by trade, but what I actually do is think about how systems hold together and come apart. Security happens to be a good vantage point for that. You learn a lot about structure by studying where it fails.
My professional work sits at the intersection of cybersecurity, AI security, enterprise architecture, and risk. But the thinking behind it draws from farther afield: mechanism design, semantic layering, topological models of how cloud environments actually behave versus how we diagram them. I’m drawn to the structural questions underneath the technical ones. What makes a domain boundary real. Why certain abstractions survive contact with production and others collapse. How agency works (and doesn’t) in systems that claim to have it.
AI is a recurring subject here, but not in the “here’s how to prompt better” sense. I’m interested in what happens when you stack an LLM inside a framework inside a platform and hand it tools. Where the failure modes live in that stack, and what the security and trust implications look like when you can’t fully characterize any single layer, let alone the composition.
Most of what I write starts as research notes in a personal knowledge vault. The posts that reach this site are the ones where the thinking held up under pressure: a model that clarified something, a breakdown worth documenting, or a connection between domains that turned out to matter. I try to show the reasoning rather than just the conclusion, so you can decide for yourself whether it holds.
I don’t write tutorials. If you build systems, think about risk, or find yourself skeptical of whichever framework is trending this quarter, some of this might be useful.