Two Approaches to Helping AI Agents Use Your API (And Why You Need Both)
Thierry Damiba
January 28, 2026

AI coding agents fail in predictable ways when working with APIs. Two recent approaches from Mintlify and Armin Ronacher attack different failure modes. Understanding both reveals something useful about how agents should interact with developer tools.
Two Failure Modes
When an agent writes code against your API, it can fail because:
It doesn’t know what it doesn’t know. The agent uses a deprecated method, misconfigures a parameter, or violates a constraint that isn’t obvious from type signatures. This is the “known unknowns” problem: things the API maintainer knows but the agent doesn’t.
It can’t discover what exists. The agent doesn’t know what collections exist, what the payload schema looks like, or what data is actually in the system. This is the “unknown unknowns” problem: things specific to the user’s environment that no amount of documentation covers.
Most agent failures trace back to one of these. Mintlify’s SKILL.md approach addresses the first. Armin Ronacher’s REPL-first MCP addresses the second.
What SKILL.md Gives You
SKILL.md is an emerging open standard for shipping knowledge to agents before they write code. The idea has roots in the Cloudflare RFC, the agentskills proposal, and Vercel’s skills CLI. Mintlify’s blog post by Michael Ryaboy showed how to apply it in practice: decision tables for component selection, explicit gotchas sections, and skill files auto-generated from existing docs. A skill.md isn’t documentation. It’s a briefing. Decision tables, not tutorials. Gotchas, not explanations.
For Qdrant, a skill.md might include:
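A sketch of what such a briefing could look like. This is illustrative, not Qdrant’s published skill file; the rules are drawn from the gotchas discussed below:

```markdown
## Gotchas

- `client.search()` is deprecated; use `client.query_points()` instead.
- Do NOT create one collection per user. Use payload-based multitenancy
  within a single collection.
- Sparse vectors for BM25 need `models.Modifier.IDF` set on the collection,
  or scoring will be wrong.
```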
This prevents the agent from using client.search() (deprecated), creating a collection per user (anti-pattern), or misconfiguring sparse vectors (common mistake). The guidance for all of these exists across Qdrant’s tutorials, docs, and community discussions. But finding it requires existing Qdrant context: you need to already know enough to ask the right questions. Skills package that accumulated product intuition so agents don’t need to build it from scratch.
What REPL-First MCP Gives You
Armin Ronacher, creator of Flask and now building Earendil, proposed a different approach. Instead of 30 narrow MCP tools, give the agent a Python shell with the SDK pre-configured:
# Agent can just run this
collections = client.get_collections()
print([c.name for c in collections.collections])
# Then inspect the actual schema
info = client.get_collection("products")
print(info.config.params.vectors)
The agent discovers what exists by asking the system directly. No tool for “list collections.” No tool for “get schema.” Just Python. The REPL handles the unknown unknowns: what’s actually in your Qdrant instance right now.
Why Neither Alone Works
skill.md without REPL: The agent knows how to use query_points but not what to query. It guesses collection names. It assumes payload fields. It writes syntactically correct code that fails at runtime.
REPL without skill.md: The agent can discover what exists but still uses deprecated methods. It creates collections with wrong configurations. It makes the same mistakes it would have made without the REPL, just with more information about the data.
Together, the agent workflow looks like this:
1. Read the skill.md briefing before writing any code.
2. Use the REPL to inspect what actually exists: collections, schemas, payloads.
3. Write code that uses the current API against the real environment.
The skill.md prevents known mistakes. The REPL handles environment-specific discovery. Both failure modes addressed.
Implementation
The skill.md is just a file you drop into your project. The REPL is an MCP tool. A minimal implementation:
@server.call_tool()
async def handle_tool_call(name: str, arguments: dict):
    if name == "qdrant-repl":
        code = arguments["code"]
        # client, models pre-configured in repl_globals
        try:
            result = eval(compile(code, "<repl>", "eval"), repl_globals)
            return repr(result) if result is not None else "(ok)"
        except SyntaxError:
            # Not an expression (e.g. an assignment): run it as statements
            exec(compile(code, "<repl>", "exec"), repl_globals)
            return "(executed)"
State persists between calls. The agent builds up context incrementally.
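The eval/exec pattern can be sketched as a standalone function, runnable without any MCP plumbing, to show the state persistence at work:

```python
# Standalone sketch of the eval/exec REPL pattern, no MCP server required.
repl_globals: dict = {}


def repl(code: str, env: dict) -> str:
    """Try to evaluate as an expression; fall back to executing statements."""
    try:
        result = eval(compile(code, "<repl>", "eval"), env)
        return repr(result) if result is not None else "(ok)"
    except SyntaxError:
        # Assignments, imports, etc. aren't expressions: run them as statements
        exec(compile(code, "<repl>", "exec"), env)
        return "(executed)"


# A variable bound in one call is visible in the next: state persists.
print(repl("x = [1, 2, 3]", repl_globals))  # (executed)
print(repl("sum(x)", repl_globals))         # 6
```

Passing the same `env` dict to every call is what gives the agent a persistent session rather than a series of throwaway one-liners.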
The Broader Pattern
This isn’t specific to Qdrant. Any API with:
- Deprecated methods or migration paths → needs skill.md
- User-specific state (databases, collections, schemas) → needs REPL
The two approaches complement each other because they address orthogonal problems. Static knowledge for static mistakes. Dynamic access for dynamic discovery.
Most developer tools need both.
This Doesn’t Make It Easy
Adding skill.md and a REPL doesn’t mean agents suddenly work flawlessly. They still hallucinate. They still misunderstand requirements. They still write code that technically runs but doesn’t do what you wanted.
What these tools do is eliminate unnecessary failures. The agent won’t fail because it used a deprecated method. That’s a solved problem with skill.md. It won’t fail because it guessed a collection name. The REPL lets it check. But it can still fail because it misunderstood what you meant by “similar products” or because the embedding model you’re using doesn’t capture the semantics you care about.
You might ask: why not just pull context from docs automatically? Tools like mcp-code-snippets, Qdrant’s MCP server for searching documentation and code examples, solve a real problem, but they solve a different one. Auto-generated context gives you API surface area. A SKILL.md gives you judgment. “Don’t create one collection per user” isn’t obvious from any API reference. “BM25 needs Modifier.IDF” is documented, but you have to know to look for it. Skills distill that intuition into a format agents can use immediately. The two approaches aren’t in competition. Use both.
The goal isn’t perfect agents. The goal is agents that fail for interesting reasons instead of boring ones. Deprecated API calls are boring failures. Wrong collection names are boring failures. Skill.md and REPL handle the boring stuff so you can focus on the hard problems.
Try It
The Qdrant Python skill is just 94 lines, and there’s also a Rust skill for the gRPC client. Both are minimal, adaptable examples for your own setup. The repo also includes the AGENTS.md format alongside SKILL.md for flexible configuration.
Drop it into your project:
mkdir -p .claude/skills/qdrant
curl -o .claude/skills/qdrant/SKILL.md \
https://raw.githubusercontent.com/qdrant/skills/main/skills/qdrant-python/SKILL.md
For the REPL side, configure the Qdrant MCP server with REPL. It’s a fork of the official Qdrant MCP server that adds a qdrant-repl tool giving agents a stateful Python shell with a pre-configured QdrantClient. For docs lookup, mcp-code-snippets gives agents semantic search over Qdrant documentation and code examples. The REPL fork layers exploratory access on top. The agent gets the briefing, the docs, and the toolkit.
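For reference, MCP servers are typically registered in the client’s config, such as Claude Desktop’s claude_desktop_config.json. The sketch below assumes the REPL fork keeps the official mcp-server-qdrant launch command and environment variables; check the fork’s README for the actual names:

```json
{
  "mcpServers": {
    "qdrant": {
      "command": "uvx",
      "args": ["mcp-server-qdrant"],
      "env": {
        "QDRANT_URL": "http://localhost:6333",
        "COLLECTION_NAME": "products"
      }
    }
  }
}
```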
Your agents will stop using deprecated methods. They’ll stop guessing collection names. They’ll write correct queries on the first attempt. Not because they got smarter, but because they got the right information at the right time.
We’d love to hear how you’re using Qdrant with AI agents. What’s working, what’s breaking, what patterns you’ve found. If you want to stay ahead of the curve on this stuff, sign up for our newsletter. And thanks to you, our readers, for pushing us to make the developer experience better for everyone.