>makes it easier to accidentally expose sensitive data.
So does the "forward" button on emails. Maybe be more careful about how your system handles sensitive data. How about:
>MCP allows for more powerful prompt injections.
This just touches on the wider topic of only working with trusted service providers, which developers should abide by generally. As for:
>MCP has no concept or controls for costs.
Rate limit and monitor your own usage. You should anyway. It's not the road's job to make you follow the speed limit.
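To make the "rate limit your own usage" point concrete, here is a minimal client-side token-bucket sketch. MCP itself has no cost controls, so this kind of enforcement has to live in your own code; the class and its parameters are illustrative, not part of any MCP SDK.

```python
import time

class TokenBucket:
    """Client-side rate limiter you could wrap around tool calls.

    Illustrative only: MCP has no cost concept, so you enforce limits yourself.
    """

    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec          # tokens refilled per second
        self.capacity = capacity          # burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill based on elapsed time, then spend one token if available.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Allow bursts of 5 calls, refilling at 2 calls/second:
bucket = TokenBucket(rate_per_sec=2, capacity=5)
allowed = [bucket.allow() for _ in range(10)]
```

The same check pairs naturally with logging each call's token usage, which covers the "monitor" half of the advice.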
Finally, many of the other issues seem to be more about coming to terms with delegating to AI agents generally. In any case it's the developer's responsibility to manage all these problems within the boundaries they control. No API should have that many responsibilities.
A large problem in this article stems from the fact that the LLM may take actions I do not want it to take. But there are clearly two types of actions the LLM can take: those I want it to take on its own, and those I want it to take only after prompting me.
There may come a time when I want the LLM to run a business for me, but that time is not yet upon us. For now I do not even want to send an e-mail generated by AI without vetting it first.
But the author rejects the solution of simply prompting the user because "it’s easy to see why a user might fall into a pattern of auto-confirmation (or ‘YOLO-mode’) when most of their tools are harmless".
Sure, and people spend more on cards than they do with cash and more on credit cards than they do on debit cards.
But this is a psychological problem, not a technological one!
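The two-tier split described above can be sketched as a simple confirmation gate in the client. The tool names and the `run_tool` helper are hypothetical placeholders, just to show the shape of the policy:

```python
# Two-tier policy sketch: auto-run "safe" tools, always prompt before
# consequential ones. All tool names here are hypothetical examples.
SAFE_TOOLS = {"search_docs", "read_file"}        # run without asking
CONFIRM_TOOLS = {"send_email", "delete_record"}  # always prompt the user first

def run_tool(tool: str, args: dict) -> str:
    # Stand-in for the actual tool invocation.
    return f"ran {tool}"

def dispatch(tool: str, args: dict, confirm=input) -> str:
    if tool in SAFE_TOOLS:
        return run_tool(tool, args)
    if tool in CONFIRM_TOOLS:
        answer = confirm(f"Agent wants to call {tool}({args}). Allow? [y/N] ")
        if answer.strip().lower() == "y":
            return run_tool(tool, args)
        return "denied by user"
    return "unknown tool refused"
```

Nothing here prevents a user from rubber-stamping every prompt, of course, which is exactly the psychological problem, not a technological one.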
The protocol is in very, very early stages and there are a lot of things that still need to be figured out. That being said, I can commend Anthropic on being very open to listening to the community and acting on the feedback. The authorization spec RFC, for example, is a coordinated effort between security experts at Microsoft (my employer), Arcade, Hellō, Auth0/Okta, Stytch, Descope, and quite a few others. The folks at Anthropic set the foundation and welcomed others to help build on it. It will mature and get better.
The essay misses the two biggest problems with MCP:
1. it does not enable AI agents to functionally compose tools.
2. MCP should not exist in the first place.
LLMs already know how to talk to every API that documents itself with OpenAPI specs; the missing piece is authorization. Why not just let the AI make HTTP requests, with authorization applied per endpoint? And indeed, people are wrapping existing APIs in thin MCP tools.

Personally, the most annoying part of MCP is the lack of support for streaming tool call results. Tool calls have a single request/response pair, which means long-running tool calls can't emit data as it becomes available – the client has to repeat a tool call multiple times to paginate. IMO, MCP could have used gRPC, which is designed for streaming. A streaming design would also need something like an onComplete trigger.
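The pagination workaround described above looks roughly like this. The `cursor`/`next_cursor` convention and the `call_tool` callable are assumptions for illustration, not part of the MCP spec; the point is that each page costs a full request/response round trip:

```python
# Sketch of polling a long-running tool via repeated calls, since a tool
# call is one request/response pair. The cursor convention is hypothetical.
def fetch_all(call_tool, tool_name: str) -> list:
    results, cursor = [], None
    while True:
        page = call_tool(tool_name, {"cursor": cursor})  # one full round trip
        results.extend(page["items"])
        cursor = page.get("next_cursor")
        if cursor is None:                               # no more pages
            return results

# Fake three-page tool so the loop can be exercised locally:
PAGES = {
    None: ([{"a": 1}], "c1"),
    "c1": ([{"b": 2}], "c2"),
    "c2": ([{"c": 3}], None),
}

def fake_tool(name, args):
    items, nxt = PAGES[args["cursor"]]
    return {"items": items, "next_cursor": nxt}

all_items = fetch_all(fake_tool, "long_job")
```

With server-side streaming (as in gRPC), the same data would arrive incrementally over one call instead of three.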
I'm the author of Modex[^1], a Clojure MCP library, which is used by Datomic MCP[^2].
[^1]: Modex: Clojure MCP Library – https://github.com/theronic/modex
[^2]: Datomic MCP: Datomic MCP Server – https://github.com/theronic/datomic-mcp/