The biggest AI privacy problems no one is talking about: Installment 1: The Agent2Agent (“A2A”) Protocol

In the privacy world, everyone is focused on fairness, bias, and data scraping. These issues, however, are not even among the top three AI privacy issues, and it’s not even close. This is not to say those issues are unimportant; indeed, the contrary is true. The point is that they won’t be the issues that initially create the most outsized risk for businesses.

The most significant class of AI privacy risks, by far, arises from the wildly popular family of information-sharing agentic protocols: protocols that allow AI agents to use external tools and resources (the Model Context Protocol, or “MCP”) and that allow AI agents to talk to other AI agents (the Agent2Agent Protocol, or “A2A Protocol”). The purpose of both protocols is to facilitate the flow of information in and out of the primary AI agent, either to receive additional data inputs from external resources (MCP) or to feed data to third-party AI agents that perform tasks the primary AI agent does not provide (A2A Protocol).

Neither MCP nor the A2A Protocol supports common consumer privacy requirements or critical enterprise privacy/security obligations (other than basic authentication and secure transmission). Nevertheless, engineers love these protocols because they give AI agents easy, plug-and-play, standardized access to external resources and third-party AI agents. As a result, there is a 100% chance that these protocols will be baked into the first AI tools that your company will ask you to approve, whether for use in enterprise environments or for consumer-facing products, services, applications, or websites.

Today we will talk about the A2A Protocol and leave MCP for a subsequent post. 

For starters, the A2A Protocol is an open-source project by Google, and its core documentation and code can be found on GitHub.

The Official A2A Protocol Mission

As noted, the core purpose of the A2A Protocol, according to its own documentation, is to allow for inter-AI agent communication. A2A aims to “break down silos” and allow AI agents to connect with one another “across different ecosystems.” The A2A Protocol seeks to do this while “preserving opacity,” i.e., allowing AI agents to “collaborate without needing to share internal memory, proprietary logic, or specific tool implementations…”.
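To make that inter-agent data flow concrete, here is a minimal sketch, in Python, of what an A2A-style delegation can look like on the wire. The endpoint, method name, and payload shape are illustrative assumptions based on the public A2A documentation (the protocol runs JSON-RPC 2.0 over HTTP(S)); this is a sketch, not a definitive implementation:

```python
import json
import urllib.request
import uuid

# Hypothetical third-party agent endpoint. A2A traffic is JSON-RPC 2.0
# over HTTP(S); the URL, method name, and payload shape below are
# illustrative assumptions based on the public A2A documentation.
REMOTE_AGENT_URL = "https://scheduling-agent.example.com/a2a"

request_body = {
    "jsonrpc": "2.0",
    "id": str(uuid.uuid4()),
    "method": "message/send",  # method name per recent spec drafts (assumption)
    "params": {
        "message": {
            "role": "user",
            "messageId": str(uuid.uuid4()),
            "parts": [{
                "kind": "text",
                # Personal information leaves the enterprise boundary here.
                "text": "Book a cardiology appointment for Jane Doe, DOB 1980-01-02",
            }],
        }
    },
}

req = urllib.request.Request(
    REMOTE_AGENT_URL,
    data=json.dumps(request_body).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # whatever the remote agent chooses to return
```

Note what is absent: nothing in this exchange tells the sending agent how the remote agent will use, retain, or re-share the personal information it just received.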

The Privacy Translation

The A2A Protocol creates a number of critical privacy and security problems for company AI agents (regardless of whether the agent is homegrown or provided by a vendor).

  1. Invisible Sharing of Data. The company AI agent will invisibly disclose and transmit data (likely including personal information) to the third parties to which it connects using the A2A Protocol.
  2. Hidden Implementation. The company AI agent will likely not disclose to you how and when it uses the A2A Protocol. In other words, whether the A2A Protocol is implemented in the company AI agent will not be obvious on the face of the agent; it will be buried in code.
  3. Lack of Transparency by Design. When the company AI agent uses the A2A Protocol, it will necessarily, as a feature of the protocol itself, be blind to the key operational privacy details of the outside AI agent to which it connects. The A2A Protocol not only “abstracts away” the operational details of the outside AI agent (an “A2A server”); it enforces this as a security guarantee via technical requirements around “opacity.” Thus, an A2A server makes available only a very small number of representations about its operational details (via the “Agent Card,” a JSON document that describes what the AI agent can do and how to interact with it; see the Agent Card Structure section below), and none of these have anything to do with privacy.
  4. Promiscuity. Google, likely inspired by its ad tech model, has designed the A2A Protocol to permit arbitrary “discovery” of A2A servers, i.e., “to allow agents to dynamically find and understand the capabilities of other agents.” See A2A Specification. This means the company AI agent very likely will not decide until runtime which third-party AI agents to connect to via the A2A Protocol (see the sketch after this list). That may be great for flexibility, but for privacy and security it’s an absolute nightmare for anything but the most trivial services, because runtime discovery leaves no opportunity for Legal or other governance functions to evaluate the third-party AI agent’s privacy and security posture before the connection is made.
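As a concrete illustration of runtime discovery, here is a minimal Python sketch. The well-known path is an assumption drawn from the public A2A documentation (the exact path has varied across spec versions), and the privacy-related field lookups at the end are deliberately hypothetical, since the spec defines no such fields:

```python
import json
import urllib.request

# A2A "discovery": an agent card is conventionally published at a
# well-known URI on the remote agent's domain. (The exact path is an
# assumption; it has varied across spec versions, e.g.
# /.well-known/agent.json vs. /.well-known/agent-card.json.)
def fetch_agent_card(domain: str) -> dict:
    url = f"https://{domain}/.well-known/agent.json"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Nothing stops a client from doing this at runtime against a domain
# that no human, much less Legal, has ever reviewed.
card = fetch_agent_card("some-agent-found-at-runtime.example.com")
print(card.get("name"), card.get("description"))

# There is no card.get("privacyPolicy"), card.get("retention"), or
# card.get("onwardSharing") to check; those field names are invented
# here to make the point that the spec defines nothing like them.
```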

The maintainers of the A2A Protocol might object to the above characterization and respond by claiming that transparency, privacy, and security are provided by the “Agent Card,” which every A2A server must make available to A2A client software. The Agent Card, however, is completely useless for security and privacy because:

(1) the “Agent Card” template for A2A servers doesn’t contain a single specified field relating to privacy (e.g., how the third party will use personal information, whether it will be shared, how long it will be stored, etc., etc.). Zero. None. Nada.

(2) the A2A server can arbitrarily and without oversight put whatever it wants on the card—it can lie with impunity.

The bottom line is that no one is going to stop endpoints from lying on their A2A Agent Cards. Of course, for privacy purposes, it won’t even matter, since the specification contains ZERO fields for privacy/data protection.

The entire A2A Specification prattles on for 9,501 words before, on the very last line of the entire document, finally giving the most perfunctory hand wave in the general direction of privacy:

Data Privacy: Adhere to all applicable privacy regulations for data exchanged in Message and Artifact parts. Minimize sensitive data transfer.

Here is the translation of the foregoing tech speak: “Please follow privacy laws when using A2A and don’t send a lot of sensitive data.” Google’s sum total of privacy protections for the A2A Protocol is basically the Vulcan valediction: “Live Long and Prosper (and, pretty please, don’t violate ‘dem privacy laws).” As noted at length above, there are no mechanisms for transparency or control relative to privacy, and there is no method for making privacy representations, much less for enforcing them.

Any enterprise or consumer-facing application that embeds the A2A Protocol would do well to undergo comprehensive static and dynamic analysis of the relevant code base to run to ground any potential A2A issues, because there are probably going to be a bunch…
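As a starting point for that kind of review, here is a minimal static-sweep sketch in Python. The search patterns are heuristic assumptions (a well-known discovery path, type and method names that appear in the public A2A materials, and common package prefixes); a real assessment would pair something like this with dynamic and network-traffic analysis:

```python
import pathlib
import re

# Heuristic fingerprints of A2A usage in a code base. These patterns
# are assumptions based on the public A2A materials, not an exhaustive
# or authoritative list.
A2A_PATTERNS = [
    r"well-known/agent(-card)?\.json",   # agent card discovery URI
    r"\bAgentCard\b",                    # agent card type name in A2A SDKs
    r"message/send|tasks/send",          # A2A JSON-RPC method names
    r"\ba2a[_\-.]",                      # common SDK/package prefixes
]

def scan(repo_root: str) -> None:
    """Print file:line for every match of an A2A fingerprint."""
    for path in pathlib.Path(repo_root).rglob("*"):
        if not path.is_file() or path.suffix not in {".py", ".ts", ".js", ".java", ".go"}:
            continue
        text = path.read_text(errors="ignore")
        for pattern in A2A_PATTERNS:
            for match in re.finditer(pattern, text):
                line_no = text.count("\n", 0, match.start()) + 1
                print(f"{path}:{line_no}: matched {pattern!r}")

scan(".")
```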

Agent Card Structure
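
For reference, here is a representative Agent Card, rendered as a Python dict for readability. The field names track the public A2A specification as of this writing (the spec is still evolving, so treat them as illustrative), and the values are hypothetical:

```python
# A representative Agent Card. Field names track the public A2A
# specification as of this writing; values are hypothetical.
agent_card = {
    "name": "Example Scheduling Agent",
    "description": "Books appointments on behalf of users.",
    "url": "https://scheduling-agent.example.com/a2a",
    "version": "1.0.0",
    "provider": {"organization": "Example Corp"},
    "capabilities": {
        "streaming": True,
        "pushNotifications": False,
    },
    # Authentication only; says nothing about data handling.
    "securitySchemes": {"bearer": {"type": "http", "scheme": "bearer"}},
    "defaultInputModes": ["text/plain"],
    "defaultOutputModes": ["text/plain"],
    "skills": [{
        "id": "book-appointment",
        "name": "Book appointment",
        "description": "Schedules appointments with third-party providers.",
        "tags": ["scheduling"],
    }],
    # Conspicuously absent: any field describing use, retention, onward
    # sharing, or any other data-protection commitment.
}
```

Every field on the card speaks to capability and connectivity; not one speaks to data protection.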

About the Author:

Steven B. Roosa, a partner in Norton Rose Fulbright’s New York office, created NT Analyzer, the firm’s privacy testing tool suite that uses network traffic analysis, and he actively develops and evaluates AI applications for various use cases.