Published Date: 8/8/2025
OpenAI’s ChatGPT has made a significant impact globally, and the technology continues to evolve rapidly. That growing sophistication is creating new challenges for the digital world, particularly around AI agents.
In a recent Dock Labs webinar, Peter Horadan, CEO of Vouched, discussed these challenges. An AI agent can be thought of as a personal assistant. For instance, if you want to book a holiday, it can act as a vacation planner.
With the latest version of ChatGPT, the AI agent can take action on your behalf: opening a browser window, prompting you to fill in sign-in details on a website, and even purchasing plane tickets. However, a long-standing rule in cybersecurity is to never give your username and password to a third party. When you type credentials into a window the agent controls, ChatGPT ends up holding a valid session key with the airline.
AI agents are also being used in professional settings. An agent might prompt the user to log in to their company’s information system, thereby gaining access to work systems and potentially the company’s finance and accounting system. Even if ChatGPT performs well, it trains users to believe that it’s acceptable to share their credentials with an AI agent, which is a very poor practice.
ChatGPT’s current method of automating user interactions involves screen scraping and browser automation that impersonates individuals and logs in on their behalf. While Anthropic’s Model Context Protocol (MCP) provides a more controlled framework for agents to retrieve information or perform actions under strict permissions, it lacks essential features for robust identity management.
First, any agent acting on a user’s behalf must be distinctly identified. This means clearly differentiating between the human and the software agent when an action is executed. Users may wish to delegate specific tasks, such as purchasing an airline ticket, without granting full authority for other activities. To facilitate this, we need mechanisms for distributed authentication and role-based delegation that track exactly which rights a human has conferred to a given agent. These capabilities are not currently addressed by the MCP specification but are vital for secure and transparent agent operations.
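The scoped delegation described above can be made concrete with a small data structure. The sketch below is illustrative only and is not part of the MCP specification; the `DelegationGrant` class, its scope strings, and the agent identifiers are all hypothetical, chosen to show how a grant could record exactly which rights a human has conferred to a given agent, and for how long.

```python
from dataclasses import dataclass, field
import time

@dataclass
class DelegationGrant:
    """A scoped, time-limited grant of authority from a human to a software agent."""
    principal: str                                  # the human user conferring authority
    agent_id: str                                   # distinct identifier for the agent
    scopes: set[str] = field(default_factory=set)   # e.g. {"flights:purchase"}
    expires_at: float = 0.0                         # Unix timestamp after which the grant lapses

    def permits(self, scope: str) -> bool:
        """Return True only if the requested action is in scope and the grant is unexpired."""
        return scope in self.scopes and time.time() < self.expires_at

# Example: delegate only airline-ticket purchasing, for one hour.
grant = DelegationGrant(
    principal="alice@example.com",
    agent_id="agent:vacation-planner",
    scopes={"flights:purchase"},
    expires_at=time.time() + 3600,
)
print(grant.permits("flights:purchase"))  # True
print(grant.permits("bank:transfer"))     # False
```

The key design point is that the grant names both parties distinctly: an audit log can then record that the agent, not the human, executed the action, under authority the human explicitly conferred.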
Second, it is crucial to track the reputation of agents. Just as email systems struggled with phishing due to a lack of native safeguards, the emerging ecosystem of autonomous agents will include both good and bad actors. There will be scam agents, fraudster agents, or hustler agents. Horadan suggests we need a way to monitor an agent’s behavior and accumulate feedback, similar to a Yelp system but for AI agents. A comprehensive reputation framework would allow platforms to flag agents that consistently violate user expectations or demonstrate malicious intent.
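A "Yelp for agents" could be sketched as a simple feedback ledger. The class below is a toy model, not anything Vouched has published; the rating scale, threshold, and minimum-report count are assumptions used to show how structured feedback from service providers could accumulate until a platform flags a misbehaving agent.

```python
from collections import defaultdict

class AgentReputation:
    """Toy reputation ledger: service providers submit ratings, platforms flag low scorers."""

    def __init__(self, flag_threshold: float = 2.0, min_reports: int = 3):
        self.reports: dict[str, list[int]] = defaultdict(list)
        self.flag_threshold = flag_threshold   # mean rating below this is flagged
        self.min_reports = min_reports         # avoid flagging on a single bad report

    def submit(self, agent_id: str, rating: int) -> None:
        """Record one structured feedback report (rating on a 1-5 scale)."""
        self.reports[agent_id].append(rating)

    def is_flagged(self, agent_id: str) -> bool:
        """Flag agents whose average rating falls below the threshold."""
        scores = self.reports[agent_id]
        if len(scores) < self.min_reports:
            return False
        return sum(scores) / len(scores) < self.flag_threshold

rep = AgentReputation()
for rating in (1, 1, 2):
    rep.submit("agent:scammy", rating)
print(rep.is_flagged("agent:scammy"))  # True
```

A production system would also need to verify who is submitting reports, which is why the article pairs reputation with an impartial rating authority rather than open voting.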
Moreover, legal and contractual considerations must be rethought in an agentic environment. Standard checkboxes for terms and conditions or electronic acceptance of contracts assume a conscious human decision. If an agent automatically consents on behalf of its user, the validity of such agreements may be legally questionable. Therefore, there will have to be a way to ensure that agents either prompt for explicit human confirmation or operate under pre-negotiated legal frameworks that clearly delineate the scope of their authority.
To address these challenges, Vouched has proposed a Know Your Agent framework and an Identity Extension for MCP. Drawing on principles from OAuth 2.0, this specification would enable durable, scoped authorizations tied to a session key that the agent presents when requesting permitted actions. It would also clearly identify the agent’s own credentials, separate from the user’s identity, and include a reporting mechanism through which service providers submit structured feedback to an impartial rating authority.
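To make the OAuth-style mechanics concrete, here is a minimal sketch of a durable, scoped session key: the issuer signs a payload naming the agent, the user, the permitted scopes, and an expiry, and the service verifies all of those before allowing an action. This is an assumption-laden illustration, not the MCPI specification; the shared HMAC secret, function names, and scope strings are all hypothetical, and a real deployment would use asymmetric keys.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-shared-secret"  # hypothetical; real systems would use asymmetric signing keys

def issue_session_key(agent_id: str, user_id: str, scopes: list[str], ttl: int = 3600) -> str:
    """Issue a signed, scoped session key the agent presents with each request."""
    payload = json.dumps({
        "agent": agent_id,        # the agent's own identity, separate from the user's
        "user": user_id,          # the human on whose behalf it acts
        "scopes": scopes,         # the only actions this key authorizes
        "exp": time.time() + ttl, # durable but not unlimited
    }).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def authorize(token: str, required_scope: str) -> bool:
    """Verify signature, expiry, and scope before permitting the requested action."""
    body, _, sig = token.partition(".")
    payload = base64.urlsafe_b64decode(body)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(payload)
    return time.time() < claims["exp"] and required_scope in claims["scopes"]

token = issue_session_key("agent:planner", "alice", ["flights:purchase"])
print(authorize(token, "flights:purchase"))  # True
print(authorize(token, "bank:transfer"))     # False
```

Note that the agent never sees the user's password: it holds a narrow credential it can present, and the service can revoke or rate-limit it independently of the user's own account.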
The result is MCPI, Vouched’s set of identity extensions to Anthropic’s MCP protocol. In the presentation, Horadan expanded on how MCPI fits into existing IAM and CIAM systems, and on the role of mobile driver’s licenses (mDLs), the EU Digital Identity (EUDI) wallet, and verifiable credentials.
The presentation also included a demo, showing how it all would work conceptually, in an easier-to-parse visual flow.
In related research, a new paper offers a model designed to protect behavioral, biometric, and personality-based digital likeness attributes as generative AI and its products become more widespread. The “Digital Identity Rights Framework” (DIRF) sets out a framework for digital identity protection and clone governance in agentic AI systems.
Formulated by a team of researchers from academia and companies including Nokia, Deloitte, and J.P. Morgan, the DIRF paper is available on arXiv. It defines 63 enforceable identity-centric controls across nine domains, with each control categorized as legal, technical, or hybrid. These domains, such as identity consent, model training governance, traceability, memory drift, and monetization enforcement, help protect individuals against unauthorized use, modeling, and monetization of their digital identity.
Interestingly, the framework not only aims to protect human identity but also improves AI system performance. According to evaluation results, the DIRF framework substantially enhances LLM performance across metrics, achieving greater prompt reliability and execution stability.
The authors outline an implementation roadmap showing how DIRF can be operationalized in AI systems, noting that it is compatible with AI security frameworks such as the NIST AI RMF and the OWASP LLM Top 10, among others.
Q: What is the main challenge with AI agents like ChatGPT?
A: The main challenge is that AI agents can impersonate users and perform actions on their behalf, which can lead to security and privacy issues, especially if users share their credentials with these agents.
Q: What is the Model Context Protocol (MCP) and what does it lack?
A: The Model Context Protocol (MCP) is a framework for controlling agent actions and retrieving information. It lacks essential features for robust identity management, such as clear differentiation between human and agent actions and mechanisms for distributed authentication and role-based delegation.
Q: What is the Know Your Agent framework proposed by Vouched?
A: The Know Your Agent framework, proposed by Vouched, is a specification that enables durable, scoped authorizations tied to a session key. It also includes clear identification of the agent’s own credentials and a reporting mechanism for structured feedback.
Q: Why is tracking the reputation of AI agents important?
A: Tracking the reputation of AI agents is important to identify and flag agents that consistently violate user expectations or demonstrate malicious intent, similar to how email systems flag phishing attempts.
Q: What is the Digital Identity Rights Framework (DIRF)?
A: The Digital Identity Rights Framework (DIRF) is a model designed to protect behavioral, biometric, and personality-based digital likeness attributes. It sets out a framework for digital identity protection and clone governance in agentic AI systems.