The premise of vertical AI is that industries with high document volumes, complex domain-specific reasoning, and legacy cost structures can support platforms built specifically for that context: not a general-purpose tool adapted after the fact, but systems architected around the workflows, data structures, and decision logic of a particular industry.
When it works, the moat compounds. Every workflow embedded, every data source connected, every decision logged makes the platform more useful and harder to replace. The system of record becomes the system of intelligence. That's a different dynamic from horizontal tools, which face ongoing commoditisation pressure as foundation models improve and switching costs stay low.
The verticals that seem most interesting to us tend to share a few characteristics. They run on fragmented data — documents, signals, institutional knowledge spread across disconnected systems — where there's no shortage of information but a real shortage of intelligence over it. They involve high-stakes, time-sensitive decisions where faster and more accurate analysis has obvious and quantifiable value. And they've historically underinvested in software, relying on people and process in ways that leave them exposed when a genuinely capable alternative emerges.
That said, we're aware this is a thesis with real risks. Domain specificity is a strength only until the domain shrinks, or until a sufficiently capable horizontal tool closes the gap. Building deep takes longer and costs more than building broad. And the behaviour change required to win adoption in entrenched industries is never straightforward.