
The Agent2Agent Protocol (A2A)
zellyn 9 days ago
It’s frustratingly difficult to see what these (A2A and MCP) protocols actually look like. All I want is a simple example conversation that includes the actual LLM outputs used to trigger a call and the JSON that goes over the wire… maybe I’ll take some time and make a cheat-sheet.
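For the MCP half, the over-the-wire shape is roughly this: a JSON-RPC 2.0 request using the `tools/call` method from the MCP spec, and the server's reply. The method and field names follow the spec; the weather tool and its arguments are invented for illustration.

```typescript
// What an MCP client sends once the LLM has decided to call a tool.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get_weather",            // tool chosen by the LLM
    arguments: { city: "London" },  // arguments the LLM filled in
  },
};

// What the MCP server sends back: a result with a content array
// that gets fed to the LLM as the tool's output.
const response = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    content: [{ type: "text", text: "14°C, light rain" }],
  },
};

console.log(JSON.stringify(request, null, 2));
```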

I have to say, the endorsements at the end somehow made this seem worse…


hliyan 9 days ago
Are we rediscovering SOA and WSDL, but this time for LLM interop instead of web services? I may be wrong, but I'm starting to wonder whether software engineering degrees should include a history subject about the rise and fall of various architectures, methodologies and patterns.

phillipcarter 9 days ago
A key difference between MCP and A2A that is apparent to me after building with MCP and now reading the material on A2A:

MCP is solving specific problems people have in practice today. LLMs need access to data they weren't trained on, but that's really hard because there are a million different ways you could RAG something. So MCP defines a standard by which LLMs can call APIs through clients (and more).

A2A solves a marketing problem that Google is chasing with technology partners.

I think I can safely say which one will still be around in 6 months, and it's not the one whose contributors all work for the same company.


Flux159 9 days ago
Some very quick initial thoughts: the JSON spec has some similarities to MCP: https://google.github.io/A2A/#/documentation?id=agent-card - there's an agent card describing capabilities, which Google wants websites to host at https://DOMAIN/.well-known/agent.json according to https://google.github.io/A2A/#/topics/agent_discovery, so crawlers can scrape it to discover agents.
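The discovery step above could be sketched like this, assuming the well-known path from the linked A2A docs. `agentCardUrl` and `fetchAgentCard` are illustrative names, not a pinned-down client API, and the fields noted in the comment are from the agent-card docs.

```typescript
// Build the well-known URL where A2A expects a site's Agent Card to live.
function agentCardUrl(domain: string): string {
  return `https://${domain}/.well-known/agent.json`;
}

// Fetch and parse the Agent Card for a domain; a crawler discovering
// agents would do this per site.
async function fetchAgentCard(domain: string): Promise<unknown> {
  const res = await fetch(agentCardUrl(domain));
  if (!res.ok) throw new Error(`no agent card at ${domain}`);
  return res.json(); // e.g. { name, description, url, capabilities, skills }
}
```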

The JSON-RPC calls look similar-ish to MCP tool calls, except the inputs and outputs look closer to the inputs/outputs of calling an LLM (i.e. messages, artifacts, etc.).
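A hedged sketch of that difference in request shape, with method and field names taken from the linked A2A docs (the task id and message text are invented): the params carry a chat-style message with parts, rather than the bare tool name plus arguments that an MCP `tools/call` carries.

```typescript
// An A2A task request: the params look like LLM chat input
// (a role + parts message), not a tool signature.
const a2aRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tasks/send",
  params: {
    id: "task-123",  // client-chosen task id
    message: {
      role: "user",
      parts: [{ type: "text", text: "Book me a flight to Tokyo" }],
    },
  },
};
// The response is a Task object whose outputs come back as
// artifacts (themselves lists of parts), again mirroring LLM output.
```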

The JS server example they give is interesting: https://github.com/google/A2A/tree/main/samples/js/src/serve... - they're using a generator to send SSE events back to the caller. It's a little weird to expose that as the API instead of just doing what Express already allows after setting up an SSE connection (calling res.write / flush multiple times).
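The plainer alternative alluded to above might look like this, using Node's stdlib `http` module (Express's `res` supports the same `write`/`end` calls): set the SSE headers once, then write one `data:` frame per event. The event payloads are invented.

```typescript
import { createServer, type ServerResponse } from "node:http";

// Format a single SSE frame: "data: <json>" followed by a blank line.
function sseFrame(event: unknown): string {
  return `data: ${JSON.stringify(event)}\n\n`;
}

// Write status events straight to the response as they occur,
// with no generator wrapped around the stream.
function streamTaskEvents(res: ServerResponse): void {
  res.writeHead(200, {
    "content-type": "text/event-stream",
    "cache-control": "no-cache",
  });
  for (const e of [{ state: "working" }, { state: "completed" }]) {
    res.write(sseFrame(e));
  }
  res.end();
}

const server = createServer((req, res) => streamTaskEvents(res));
// server.listen(3000) would serve this; omitted so the sketch exits cleanly.
```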


simonw 9 days ago
I just published some notes on MCP security and prompt injection. MCP doesn't have security flaws in the protocol itself, but the patterns it encourages (giving LLMs access to tools that can act on the user's behalf while they may also be exposed to text from untrusted sources) make it ripe for prompt injection attacks: https://simonwillison.net/2025/Apr/9/mcp-prompt-injection/