In this post I implement an MCP server for the STFU project (an AI-powered satirical diary generator for billionaires) and try to use it with claude.ai. That supposedly classifies it as a Custom connector in claude.ai parlance.
Why do I need an MCP server at all?
What even is MCP? Model Context Protocol is an emerging standard for how LLMs like Claude and Gemini integrate with external systems - essentially an API layer for agents.
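To make that concrete, here is a minimal sketch of what talking to an MCP server over HTTP looks like. The URL is made up, and a real session would normally start with an `initialize` handshake before a call like this:

```ts
// MCP is JSON-RPC 2.0; over Streamable HTTP a client POSTs messages to one endpoint.
// The URL is illustrative, and a real client would send `initialize` first.
const response = await fetch("https://mcp.example.com/", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    // Streamable HTTP servers expect clients to accept both response types.
    Accept: "application/json, text/event-stream",
  },
  body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "tools/list" }),
});
console.log(await response.json()); // -> { jsonrpc: "2.0", id: 1, result: { tools: [...] } }
```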
I, like most humans, am lazy. Being lazy is what makes me look for easier ways to do things: I want better content in the project, I want the prompts optimized, and I want more flexibility in how I do things.
Most of all, I want the support of an agent when doing those things: run through the changes, extract data, learn how to improve the process. Close the feedback loop on content generation, make it shorter, more efficient.
How did I implement it?
Phase 1: Foundation (Jan 18, 2026)
Initial MCP server setup with OAuth authentication and core tools.
I had Claude build the core component, mostly from scratch but using the MCP protocol npm package. This started getting complicated and I wanted to move away from custom code, so I chose to integrate mcp-handler from Vercel. The integration went well until I deployed it to the staging environment.
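For context, a route built on mcp-handler looks roughly like the sketch below. The tool, schema, and paths are illustrative, not the actual STFU tools:

```ts
// app/api/[transport]/route.ts - a minimal mcp-handler route (illustrative tool)
import { createMcpHandler } from "mcp-handler";
import { z } from "zod";

const handler = createMcpHandler(
  (server) => {
    server.tool(
      "list_diary_entries",                       // hypothetical tool name
      "List the most recently generated diary entries",
      { limit: z.number().int().min(1).max(50) }, // input schema via zod
      async ({ limit }) => ({
        content: [{ type: "text", text: `Would return the ${limit} latest entries here.` }],
      })
    );
  },
  {},                   // server options
  { basePath: "/api" }  // must match where the route lives
);

export { handler as GET, handler as POST };
```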
On staging I started getting weird 500 errors, seemingly at random, and not just on the MCP routes: the app itself would start breaking. I tried to understand the cause, but couldn't find it. The whole authentication flow kept breaking or not succeeding, and there was little for me to debug on the claude.ai side. The Vercel deployment worked fine for everything else; as soon as I hit the MCP, the random 500s came back.
I was stumped. I couldn't think of what could be breaking it, so I used the best approach I could think of: I went to sleep and made it a problem for the next day.
Phase 2: Handler Refactoring (Jan 19, 2026)
Woke up, time to work. I kept thinking back to the breakage, and one thing stood out: only a subset of requests would start throwing 500s, and only for a short while after hitting the MCP. My suspicion was a tainted function runner: something the mcp-handler did during initialization was breaking the actual implementation, we would get 500s, and the only log was a complaint about a missing response object.
We live in the time of agents, and testing hunches has become fairly cheap, so my thought was the following: mcp-handler creates some state in the function that breaks my server! Claude Code to the rescue: research the mcp-handler package and find the state present in it. It researched, it found it, and it wrote a custom handler based on mcp-handler that is fully stateless. Pronto: the 500s are gone, we are back, baby!
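The gist of the stateless approach, sketched here with the official MCP SDK and Express for brevity (the real handler adapts the same idea to the Next.js route handlers): build a fresh server and transport on every request so nothing can leak between invocations.

```ts
// Sketch: fully stateless MCP handling - new McpServer + transport per request,
// so no state survives between serverless invocations. Names are illustrative.
import express from "express";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";

const app = express();
app.use(express.json());

app.post("/mcp", async (req, res) => {
  const server = new McpServer({ name: "stfu-mcp", version: "1.0.0" }); // register tools here
  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: undefined, // undefined = stateless mode, no session tracking
  });

  // Tear everything down when the response closes; nothing is reused.
  res.on("close", () => {
    transport.close();
    server.close();
  });

  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
});

app.listen(3000);
```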
So, now I can see the flow working until the registration for the MCP client is attempted. Wait, what registration for the MCP client?
Phase 3: OAuth & Cross-Domain Support (Jan 24, 2026)
Ah, there needs to be a bit more to the game, no? Setting up dynamic client registration is the next step in supporting claude.ai natively, allowing it to register a client when it starts the flow. Now the actual flow gets to the end of the auth, and I can see a token being generated.
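Dynamic client registration itself is a small endpoint described by RFC 7591: the client POSTs its metadata and gets back a client_id it can use for the rest of the authorization flow. A hedged sketch, with a hypothetical path and a hypothetical `saveClient` persistence helper:

```ts
// app/oauth/register/route.ts - minimal RFC 7591-style registration endpoint.
// The path, returned fields, and saveClient() are illustrative, not the actual implementation.
import { NextResponse } from "next/server";
import { randomUUID } from "node:crypto";

export async function POST(req: Request) {
  const metadata = await req.json();

  const client = {
    client_id: randomUUID(),
    client_id_issued_at: Math.floor(Date.now() / 1000),
    client_name: metadata.client_name ?? "unknown",
    redirect_uris: metadata.redirect_uris ?? [],
    token_endpoint_auth_method: "none", // public client; PKCE protects the code exchange
    grant_types: ["authorization_code", "refresh_token"],
    response_types: ["code"],
  };

  // await saveClient(client); // hypothetical: persist so /authorize can validate redirect_uris

  return NextResponse.json(client, { status: 201 });
}
```

The endpoint also has to be advertised as `registration_endpoint` in the server's OAuth authorization server metadata so clients can discover it.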
The reported status is still disconnected, though. What gives? I keep researching and find out that I am not that good at reading documentation: for the claude.ai integration, I need a custom domain for the MCP. Another side quest, this time learning how to set up an alternative domain on Next.js through rewrites, and hey pronto, we are now running on a new subdomain that maps the whole MCP story over.
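The rewrite setup is roughly this shape; the host name and destination path below are placeholders, not the real ones:

```ts
// next.config.ts - sketch: serve the MCP routes on their own subdomain via rewrites.
// Host and destination are illustrative.
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  async rewrites() {
    return [
      {
        source: "/:path*",
        has: [{ type: "host", value: "mcp.example.com" }], // requests hitting the MCP subdomain...
        destination: "/api/mcp/:path*",                    // ...get mapped onto the MCP handlers
      },
    ];
  },
};

export default nextConfig;
```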
Now I get to the point where claude.ai constantly requests the root endpoint with GET, but SSE isn't enabled, so the server shouldn't be returning a 200 or a 202 there. The fix is to start returning 405 with an Allow header saying we only support POST, HEAD, and OPTIONS, but not GET.
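The GET handler ends up being a few lines; this sketch assumes it sits next to the POST handler in the same route file:

```ts
// Reject GET on the MCP endpoint: SSE streaming isn't enabled, so advertise
// the methods that are actually supported instead of returning 200/202.
export async function GET() {
  return new Response(null, {
    status: 405,
    headers: { Allow: "POST, HEAD, OPTIONS" },
  });
}
```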
I thought this was the end, and indeed it was, just not the way I thought.
Trying to get Claude.ai to work was a pain. I could tell the auth was finishing and the token was dispensed (I had a log in Vercel for it now). I could trace most of the flow by inspecting the connectors page in a browser, watching the OAuth 2.1 flow complete. I kept trying this for quite a while, 2-3 hours on and off, but just couldn't get it working: it wouldn't do the POST to initialize. I had Claude do some research online and found I wasn't alone - there are multiple open issues on the MCP GitHub describing exactly this behavior (see Known Issues below). Cold comfort when you're debugging at midnight.
I decided to pivot and connect it to Claude Code instead; it just seemed easier. And it worked, and it still works. I will keep looking for a solution with Claude.ai, but for now it is enough for me to be able to close the loop in Claude Code.
Payoff
So, I spent almost a week (well, a couple of hours spread over a week) setting up an MCP server. What did I get out of it? Any improvements?
My average generation cost over the last month before using the MCP was around $1.45/day, which covers generating all three entries, the social media posts (13 altogether), and any additional topics. Through the interactions with Claude Code I was able to bring this down to around $0.90. That's a saving of roughly 38%. Along the way I also migrated from my "hand-written" templating engine to Handlebars, which added a load of flexibility. This is just the first outcome; I will be working on more.
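For flavor, the Handlebars side of that migration is as simple as compiling a template once and feeding it the per-entry context; the template text below is made up, not one of the real prompts:

```ts
// Illustrative Handlebars usage - the actual STFU prompt templates differ.
import Handlebars from "handlebars";

const entryPrompt = Handlebars.compile(
  "Write a satirical diary entry for {{persona}} reacting to {{topic}}, " +
    "in no more than {{maxWords}} words."
);

console.log(
  entryPrompt({
    persona: "a fictional rocket-obsessed billionaire",
    topic: "a union vote",
    maxWords: 300,
  })
);
```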
How did I achieve the savings?
- Exposed 20+ tools through the MCP server
- Had Claude Code review the latest conversations that generate the content:
  - compared my templates to the outputs
  - found duplication in the context (I had started with single calls, not a conversation)
  - removed the duplication
  - also optimized the templates along the way
The output is now pretty much the same as before quality-wise, but it costs less.
The main learning? It is incredibly hard to debug issues with a custom connector on Claude.ai: there isn't a console to tell you what is wrong, and there are no logs (you can see your handlers' logs on Vercel, but nothing on the Anthropic side).
This isn't too surprising for a new technology, but it is frustrating when you are trying to get an integration going. I am sure it will keep getting better.
Known Issues & Resources
If you’re attempting a Claude.ai custom connector and hitting walls, you’re not alone. As of January 2026, there are several open issues on the MCP GitHub that describe exactly what I experienced:
| Issue | Description |
|---|---|
| #1675 | Tools not visible despite successful OAuth - same servers work with Claude Desktop and ChatGPT |
| #1674 | Token never sent in MCP requests after OAuth exchange completes |
| #653 | Claude.ai sends invalid claudeai scope regardless of server metadata |
| #908 | Protocol unclear on when tools/list should be called (closed) |
The common thread: servers that work fine with Cursor, MCP Inspector, or ChatGPT fail silently on Claude.ai. If you’re stuck, try Claude Code or Claude Desktop instead - they use a different client implementation that seems more reliable.