Introducing Substrate

Today, we’re launching Substrate. We’re also announcing our $8M Series Seed led by Lightspeed.

We believe the most robust and productive AI systems use many models in coordination, with regular programming mixed in. This combination yields more capable, more reliable, and more interpretable AI systems.

Most people building with AI already know this. But unless you work for Google, the main barrier to realizing multi-step AI workloads is infrastructure. The rest of us are left with an unwieldy mess of chained API calls to multiple providers, with many network round trips in between. We took a hard look at this state of affairs and recognized how much it stifles progress.

Building large multi-step AI workloads requires sophisticated high-performance tooling and infrastructure. Nobody wants to deal with more tooling and infrastructure… but everyone would benefit from simple, intuitive interfaces that abstract away a powerful system underneath.

No tooling, no infrastructure – just elegant abstractions.

Substrate is the first inference API optimized for multi-step AI workloads. With Substrate, you connect nodes from a curated library that includes optimized ML models, built-in file and vector storage, a code interpreter, and logical control flow. By simply connecting nodes, you describe a graph program, which Substrate then analyzes and runs as fast as possible. Entire graphs of many nodes will often run on a single machine, with microsecond communication between tasks.
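The node-and-graph idea can be sketched in plain Python. This is an illustrative model of the pattern, not Substrate's actual SDK; every name below is hypothetical. Wiring nodes together declares a dependency graph, and a runner then executes it in dependency order, which is what lets a system like Substrate analyze the whole program before running it:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

class Node:
    """One step in a workload; deps are the nodes whose outputs it consumes."""
    def __init__(self, name, fn, *deps):
        self.name, self.fn, self.deps = name, fn, deps

def run_graph(sink):
    """Collect every node the sink depends on, then execute in dependency order."""
    graph, stack = {}, [sink]
    while stack:
        node = stack.pop()
        if node not in graph:
            graph[node] = set(node.deps)
            stack.extend(node.deps)
    results = {}
    for node in TopologicalSorter(graph).static_order():
        results[node] = node.fn(*(results[d] for d in node.deps))
    return results[sink]

# Connecting nodes describes the program; nothing executes until run_graph.
prompt  = Node("prompt",  lambda: "tell me about graphs")
answer  = Node("answer",  lambda p: f"model output for: {p}", prompt)
shorter = Node("shorter", lambda a: a.upper(), answer)
print(run_graph(shorter))
```

Because the full graph is known up front, a scheduler is free to fuse, batch, or co-locate nodes on one machine, which is the optimization opportunity the paragraph above describes.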

We’ve been working on Substrate privately for nearly a year. We’ve battle-tested the product with great customers like Substack, and we’re finally ready to open access to everyone.

Let us know what you think. We can’t wait to see what you build.