Beyond A2A and MCP: How LOKA’s Universal Agent Identity Layer changes the game

Agentic interoperability is gaining steam, but organizations continue to propose new interoperability protocols while the industry works out which standards to adopt.
A group of researchers from Carnegie Mellon University has proposed a new interoperability protocol governing the identity, accountability and ethics of autonomous AI agents. Layered Orchestration for Knowledgeful Agents, or LOKA, could join other proposed standards such as Google’s Agent2Agent (A2A) and Anthropic’s Model Context Protocol (MCP).
In a paper, the researchers noted that the rise of AI agents underlines the importance of governing them.
“As their presence spreads, the need for a standardized framework to govern their interactions becomes paramount,” the researchers wrote. “Despite their growing omnipresence, AI agents often operate in siloed systems, lacking a common protocol for communication, ethical reasoning and compliance with jurisdictional regulations. This fragmentation poses significant risks, such as interoperability issues, ethical misalignment and accountability gaps.”
To address this, they propose the open-source LOKA, which would enable agents to prove their identity, “exchange semantically rich, ethically annotated messages,” and add accountability and ethical governance to the agent’s decision-making process.
LOKA builds on what the researchers call a Universal Agent Identity Layer, a framework that assigns agents a unique and verifiable identity.
“We see LOKA as a foundational architecture and a call to reexamine the core elements (identity, intent, trust and ethical consensus) that should underpin agent interactions. As the scope of AI agents expands, it is crucial to assess whether our existing infrastructure can responsibly facilitate this,” said Rajesh Ranjan, one of the researchers.
LOKA’s layers
LOKA works as a layered stack. The first layer is the identity layer, which defines what the agent is. It includes a decentralized identifier, or a “unique, cryptographically verifiable ID.” This allows users and other agents to verify the agent’s identity.
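A decentralized, cryptographically verifiable identifier can be sketched as follows. The LOKA paper does not publish its exact scheme, so the `did:loka:` prefix, class names and challenge-response flow here are all illustrative assumptions. Real DID systems use asymmetric signatures (e.g. Ed25519); HMAC stands in below only so the sketch runs on the Python standard library alone.

```python
# Hypothetical sketch of an agent identity with a DID-style identifier.
# NOT the actual LOKA mechanism: the "did:loka:" prefix is invented, and
# HMAC (symmetric) stands in for a real asymmetric signature scheme.
import hashlib
import hmac
import os

class AgentIdentity:
    def __init__(self):
        self._secret = os.urandom(32)  # private key material, never shared
        digest = hashlib.sha256(self._secret).hexdigest()
        self.did = f"did:loka:{digest[:32]}"  # unique, derived identifier

    def sign(self, message: bytes) -> str:
        """Produce a verifiable tag binding this identity to a message."""
        return hmac.new(self._secret, message, hashlib.sha256).hexdigest()

    def verify(self, message: bytes, signature: str) -> bool:
        """Check that a message was signed by this identity."""
        return hmac.compare_digest(self.sign(message), signature)

agent = AgentIdentity()
sig = agent.sign(b"task: summarize report")
assert agent.verify(b"task: summarize report", sig)   # genuine message
assert not agent.verify(b"tampered message", sig)     # altered message fails
```

In a production system the identifier would be derived from a public key and registered in a verifiable data registry, so any party can check a signature without access to the agent's secret.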
The next layer is the communication layer, in which the agent informs another agent of its intention and the task it must perform. These are followed by the ethics layer and the security layer.
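A "semantically rich, ethically annotated" message in the communication layer might look like the envelope below. The field names and tag vocabulary are assumptions for illustration, not taken from the LOKA specification.

```python
# Hypothetical message envelope for the communication layer.
# Field names (sender_did, intent, ethical_tags, ...) are illustrative.
import json
from dataclasses import asdict, dataclass, field

@dataclass
class LokaMessage:
    sender_did: str            # verifiable identity of the sender
    receiver_did: str          # verifiable identity of the recipient
    intent: str                # what the sender wants to accomplish
    task: str                  # the concrete task being delegated
    ethical_tags: list = field(default_factory=list)  # hints for the ethics layer

msg = LokaMessage(
    sender_did="did:loka:abc123",
    receiver_did="did:loka:def456",
    intent="delegate",
    task="summarize quarterly report",
    ethical_tags=["no-personal-data", "attribution-required"],
)
wire = json.dumps(asdict(msg))  # serialized form sent over the wire
```

The receiving agent can parse the envelope, verify `sender_did` against the identity layer, and hand `ethical_tags` to its ethics layer before acting on the task.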
LOKA’s ethical layer determines how the agent behaves. It contains “a flexible yet robust ethical decision-making framework that allows agents to adapt to varying ethical standards, depending on the context in which they operate.” The LOKA protocol employs collective decision-making models, allowing agents to determine their next steps within the framework and assess whether those steps align with ethical and responsible AI standards.
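A collective decision-making model of the kind described could be sketched as a vote: each participating agent applies its local policy to a proposed action, and the action proceeds only with majority approval. The voting rule and policy functions below are illustrative assumptions; the paper describes collective models without prescribing one.

```python
# Hypothetical majority-vote ethical check. The threshold rule and the
# example policies are illustrative, not the LOKA paper's actual model.
from typing import Callable

def collective_approval(action: dict,
                        policies: list[Callable[[dict], bool]],
                        threshold: float = 0.5) -> bool:
    """Approve the action only if more than `threshold` of policies agree."""
    votes = [policy(action) for policy in policies]
    return sum(votes) / len(votes) > threshold

# Two example local policies (assumed field names: "tags", "scope").
no_private_data = lambda a: "private-data" not in a.get("tags", [])
within_scope = lambda a: a.get("scope") == "approved"

action = {"name": "send_report", "tags": [], "scope": "approved"}
assert collective_approval(action, [no_private_data, within_scope])
```

Because each agent brings its own policy function, the same mechanism lets agents adapt to different ethical standards depending on the deployment context.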
Meanwhile, the security layer employs what the researchers describe as “quantum-resilient cryptography.”
What sets LOKA apart
The researchers said LOKA stands out because it establishes the crucial information agents need to communicate with other agents and to operate autonomously across different systems.
LOKA could be useful for enterprises, both to ensure the safety of the agents they deploy and to provide a traceable way to understand how an agent made its decisions. Many companies fear that an agent will tap into another system, access private data and make a mistake.
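Traceability of agent decisions is a generic pattern that can be sketched as a tamper-evident audit trail: each log entry hashes the previous one, so any later modification is detectable. This is a common technique, not the specific mechanism the LOKA paper describes, and all names below are illustrative.

```python
# Hypothetical tamper-evident audit trail for agent decisions.
# Hash-chaining is a generic pattern, not LOKA's published mechanism.
import hashlib
import json
import time

class AuditTrail:
    def __init__(self):
        self.entries = []

    def record(self, agent_did: str, decision: str) -> None:
        """Append a decision, chained to the hash of the previous entry."""
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        entry = {"agent": agent_did, "decision": decision,
                 "prev": prev, "ts": time.time()}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute every hash; any edit to a past entry breaks the chain."""
        prev = "genesis"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if body["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

trail = AuditTrail()
trail.record("did:loka:abc123", "approved: summarize report")
trail.record("did:loka:abc123", "denied: export private data")
assert trail.verify()
```

An auditor replaying `verify()` can confirm the recorded decision history is intact, which is the kind of traceable accountability the researchers argue enterprises need.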
Ranjan said the system “emphasizes the need to define who agents are, how they make decisions and how they are held accountable.”
“Our vision is to address the critical questions that are often overshadowed in the rush to scale AI agents: how do we create ecosystems where these agents can be trusted, held accountable and interoperate ethically across different systems?” said Ranjan.
LOKA will have to compete with other agentic protocols and standards now on the rise. Protocols such as MCP and A2A have found a large audience, not only because of the technical solutions they offer, but because these projects are backed by organizations people know. Anthropic launched MCP, while Google backs A2A, and both protocols have been opened up for many companies to use and improve.
LOKA operates independently of those projects, but Ranjan said the team has received “very encouraging and exciting feedback” from other researchers and institutions about expanding the LOKA research project.