> EPISODE
Ep. 2: Auto-generating an MCP server & client with Stainless
In the past, users were limited to what SaaS apps exposed through their dashboards: uploading CSVs, manually provisioning resources, toggling features in the UI, and so on. MCP shifts this model by letting AI agents handle tasks directly, but for that to work, APIs must be intentionally crafted to be AI-accessible.
The challenge is that most MCP servers are built from one of two flawed approaches:
- Naively generating tools from OpenAPI specs with open-source generators doesn't work well. You end up with too many tools, bloated definitions, and long sequences of API calls that LLMs can't reason through.
- Starting simple and manually creating a few high-value tools works at first, but quickly hits scaling issues. Most APIs have 50+ endpoints, yet only a subset is relevant, and LLMs don't need every request parameter or response field. So how do you choose what to expose, and how do you simplify it?
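To make the selection problem concrete, here is a minimal sketch in plain Python (independent of any MCP SDK; the endpoint names, allowlist, and parameter budget are hypothetical) of deriving a curated toolset from an OpenAPI spec: keep only allowlisted operations and cap how many optional parameters each tool exposes.

```python
# Sketch: curate a small toolset from an OpenAPI-style spec dict.
# All operation IDs, fields, and the max_params budget are illustrative.

def curate_tools(spec: dict, allowlist: set, max_params: int = 5) -> list:
    """Keep only allowlisted operations; drop optional params beyond a budget."""
    tools = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            op_id = op.get("operationId")
            if op_id not in allowlist:
                continue  # skip endpoints the agent doesn't need
            params = op.get("parameters", [])
            required = [p for p in params if p.get("required")]
            optional = [p for p in params if not p.get("required")]
            # Required params always survive; optional ones fill the budget.
            kept = required + optional[: max(0, max_params - len(required))]
            tools.append({
                "name": op_id,
                "description": op.get("summary", ""),
                "parameters": [p["name"] for p in kept],
            })
    return tools

# Hypothetical spec: two customer endpoints plus an internal one
# that should never be exposed to an agent.
spec = {
    "paths": {
        "/customers": {
            "get": {
                "operationId": "list_customers",
                "summary": "List customers",
                "parameters": [
                    {"name": "limit", "required": False},
                    {"name": "starting_after", "required": False},
                    {"name": "email", "required": False},
                ],
            },
            "post": {
                "operationId": "create_customer",
                "summary": "Create a customer",
                "parameters": [],
            },
        },
        "/internal/debug": {
            "get": {"operationId": "debug_dump", "summary": "Internal", "parameters": []},
        },
    }
}

tools = curate_tools(spec, allowlist={"list_customers", "create_customer"}, max_params=2)
print([t["name"] for t in tools])  # the internal endpoint is filtered out
```

This only scratches the surface: an allowlist and a parameter budget handle "which endpoints" and "which params," but not response trimming or chunking related tools together, which the session covers as open problems.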
In this session, we’ll explore what it takes to build usable, scalable toolsets for AI agents. We’ll share early experiments, including approaches for selecting, filtering, and chunking tools, and highlight where things still break down, especially around orchestration and reasoning.