Published Date: 8/8/2025
OpenAI’s ChatGPT has made a significant impact worldwide, and the technology is advancing rapidly. That growing sophistication is creating new challenges for the digital world, particularly around AI agents.
In a recent Dock Labs webinar, Peter Horadan, CEO of Vouched, discussed the issue. An AI agent can be understood as a personal assistant: if you want to book a holiday, for example, the agent becomes your vacation planner.
With the latest version of ChatGPT, the AI agent can act for you: open a browser window, prompt you to enter your sign-in details on a website, and even buy plane tickets. However, a long-standing rule in cybersecurity is to never give your username and password to a third party. Once you type them into that third-party window, ChatGPT holds a valid session key with the airline.
People are also using AI agents at work, where the agent might prompt the user to log in to their company’s information systems. The agent is then logged in as that user across their work systems, including the company’s finance and accounting system. Even if ChatGPT behaves well, this trains users to hand their credentials to an AI agent, which is a very bad practice.
ChatGPT’s current approach to automating user interactions relies on screen scraping and browser automation that impersonates individuals and logs in on their behalf. Anthropic’s Model Context Protocol (MCP), released in late 2024, provides a more controlled framework, but it lacks essential features for robust identity management.
First, any agent acting on a user’s behalf must be distinctly identified. This means clearly differentiating between the human and the software agent when an action is executed. Users may wish to delegate specific tasks, such as purchasing an airline ticket, without granting full authority for other activities. To facilitate this, we need mechanisms for distributed authentication and role-based delegation that track exactly which rights a human has conferred to a given agent. These capabilities are not currently addressed by the MCP specification but are vital for secure and transparent agent operations.
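To make the idea concrete, here is a minimal sketch of what such a scoped, time-bounded delegation could look like. The MCP specification does not define any of this; the `DelegationGrant` structure, field names, and scope strings below are illustrative assumptions, not a published standard.

```python
# Illustrative sketch only: a scoped delegation record that a human might
# issue to an agent. All names here are assumptions for this example,
# not part of the MCP specification.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class DelegationGrant:
    human_id: str         # the principal who delegates
    agent_id: str         # the software agent receiving authority
    scopes: list[str]     # rights conferred, e.g. "flights:purchase"
    expires_at: datetime  # delegations should be time-bounded

    def allows(self, action: str) -> bool:
        """An action is permitted only if it is in scope and unexpired."""
        return action in self.scopes and datetime.now(timezone.utc) < self.expires_at

# The human grants narrow, task-specific authority:
grant = DelegationGrant(
    human_id="user-42",
    agent_id="vacation-planner-agent",
    scopes=["flights:search", "flights:purchase"],
    expires_at=datetime.now(timezone.utc) + timedelta(hours=1),
)

assert grant.allows("flights:purchase")      # explicitly delegated
assert not grant.allows("payroll:approve")   # never conferred
```

The key design point is that the agent carries its own identity (`agent_id`) alongside the human’s, so a relying service can always tell who delegated what to whom, and for how long.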
Second, it is imperative to track the reputation of agents. Just as email systems struggled with phishing due to a lack of native safeguards, the emerging ecosystem of autonomous agents will produce both good and bad actors: scam agents, fraudster agents, and hustler agents. We need a way to monitor an agent’s behavior and accumulate feedback, similar to Yelp, but for AI agents. A comprehensive reputation framework would allow platforms to flag agents that consistently violate user expectations or demonstrate malicious intent.
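No such registry exists yet, but a rough sketch shows how little machinery the core idea requires: structured reports from relying parties, accumulated per agent by a neutral authority. The `AgentReport` and `ReputationRegistry` names and the scoring below are hypothetical.

```python
# Illustrative sketch: structured feedback on agent behavior, aggregated
# by a neutral rating authority. Names and scoring are assumptions.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class AgentReport:
    agent_id: str
    reporter: str      # e.g. the service provider filing the report
    outcome: str       # "ok", "scope_violation", "fraud_suspected", ...

class ReputationRegistry:
    """Accumulates reports per agent, like a review system for AI agents."""
    def __init__(self) -> None:
        self._reports: dict[str, list[AgentReport]] = defaultdict(list)

    def submit(self, report: AgentReport) -> None:
        self._reports[report.agent_id].append(report)

    def violation_rate(self, agent_id: str) -> float:
        reports = self._reports[agent_id]
        if not reports:
            return 0.0
        bad = sum(1 for r in reports if r.outcome != "ok")
        return bad / len(reports)

registry = ReputationRegistry()
registry.submit(AgentReport("booking-bot", "airline.example", "ok"))
registry.submit(AgentReport("booking-bot", "hotel.example", "scope_violation"))
print(registry.violation_rate("booking-bot"))  # 0.5 -> could trigger a flag
```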
Moreover, legal and contractual considerations must be rethought in an agentic environment. Standard checkboxes for terms and conditions, or electronic acceptance of contracts, assume a conscious human decision. If an agent automatically consents on behalf of its user, the validity of such agreements may be legally questionable. Agents will therefore need to either prompt for explicit human confirmation or operate under pre-negotiated legal frameworks that clearly delineate the scope of their authority.
To address these challenges, Vouched has proposed a Know Your Agent framework and an Identity Extension for MCP. Drawing on principles from OAuth 2.0, this specification would enable durable, scoped authorizations tied to a session key that the agent presents when requesting permitted actions. It would also include clear identification of the agent’s own credentials, separate from the user’s identity, and a reporting mechanism through which service providers submit structured feedback to an impartial rating authority.
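The actual Know Your Agent and MCPI specifications are not reproduced in the webinar coverage, but the OAuth 2.0-inspired flow it describes might look roughly like the standalone sketch below: an opaque session key is bound to a scoped grant, and the service provider checks the agent’s identity, the scope, and expiry on every request. All names and the token store are assumptions for illustration.

```python
# Illustrative, standalone sketch of the service-provider side: an agent
# presents an opaque session key; the provider looks up the grant bound
# to that key and checks agent identity, scope, and expiry. All names
# are assumptions, not the published MCPI specification.
import secrets
import time

# session key -> grant record (which agent, which scopes, until when)
SESSION_STORE: dict[str, dict] = {}

def issue_session_key(agent_id: str, scopes: set[str], ttl_s: int) -> str:
    """Bind a durable, scoped authorization to an opaque session key."""
    key = secrets.token_urlsafe(32)
    SESSION_STORE[key] = {
        "agent_id": agent_id,
        "scopes": scopes,
        "expires": time.time() + ttl_s,
    }
    return key

def authorize(session_key: str, agent_id: str, action: str) -> bool:
    grant = SESSION_STORE.get(session_key)
    if grant is None or grant["agent_id"] != agent_id:
        return False                      # unknown key, or wrong agent
    if time.time() >= grant["expires"]:
        return False                      # durable, but still time-bounded
    return action in grant["scopes"]

key = issue_session_key("vacation-planner-agent", {"flights:purchase"}, 3600)
print(authorize(key, "vacation-planner-agent", "flights:purchase"))  # True
print(authorize(key, "vacation-planner-agent", "account:close"))     # False
```

The important property is that the session key belongs to the agent, not the user: the user’s own credentials never pass through the agent, and revoking the key revokes only what was delegated.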
The result is MCPI, Vouched’s identity extension to Anthropic’s MCP. In the presentation, Horadan also covered how MCPI fits into existing IAM and CIAM systems and the role of mDLs, EUDI, and verifiable credentials.
Separately, a paper proposes a model to protect behavioral, biometric, and personality-based digital likeness attributes, a need that grows as generative AI and its products become more widespread. The “Digital Identity Rights Framework” (DIRF) sets out a framework for digital identity protection and clone governance in agentic AI systems. Formulated by a team of researchers from academia and companies including Nokia, Deloitte, and J.P. Morgan, the paper is available on arXiv. It defines 63 enforceable identity-centric controls across nine domains, with each control categorized as legal, technical, or hybrid. The controls are designed for flexible adoption in real-world AI systems and cover areas such as identity consent, model training governance, traceability, memory drift, and monetization enforcement.
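A catalog structured that way (controls tagged by domain and by legal/technical/hybrid category) lends itself to straightforward machine-readable encoding. As a hedged sketch, the records and field names below are invented for illustration and are not taken from the paper’s actual control list.

```python
# Illustrative sketch of how a DIRF-style control catalog might be
# encoded for adoption in an AI system. The paper defines 63 controls
# across nine domains; these example records are assumptions, not the
# paper's actual controls.
from dataclasses import dataclass

@dataclass(frozen=True)
class DIRFControl:
    control_id: str
    domain: str      # e.g. "identity consent", "traceability"
    category: str    # "legal", "technical", or "hybrid"
    description: str

catalog = [
    DIRFControl("IC-01", "identity consent", "legal",
                "Obtain explicit consent before cloning a person's likeness"),
    DIRFControl("TR-03", "traceability", "technical",
                "Log the provenance of identity attributes used in training"),
]

# Filter the catalog the way a deployment might: technical controls only
technical = [c for c in catalog if c.category == "technical"]
```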
Interestingly, the framework not only aims to protect human identity but also improves AI system performance. According to the authors’ evaluations, DIRF substantially enhances LLM performance across metrics, achieving greater prompt reliability and execution stability. The authors outline an implementation roadmap showing how DIRF can be operationalized in AI systems, and note that it is compatible with AI security frameworks such as the NIST AI RMF and the OWASP LLM Top 10.
Q: What is the main issue with current AI agents like ChatGPT?
A: Current AI agents like ChatGPT rely on screen scraping and browser automation to impersonate users and log in on their behalf, which can compromise user security and train users to give their credentials to third parties.
Q: What is the Know Your Agent framework proposed by Vouched?
A: The Know Your Agent framework, proposed by Vouched, is a specification that enables durable, scoped authorizations for AI agents, with clear identification of the agent’s own credentials, separate from the user’s identity, and a reporting mechanism for structured feedback.
Q: Why is it important to track the reputation of AI agents?
A: Tracking the reputation of AI agents is important to flag and prevent malicious or fraudulent agents, ensuring a safer ecosystem for users.
Q: What is the Digital Identity Rights Framework (DIRF)?
A: The Digital Identity Rights Framework (DIRF) is a model designed to protect behavioral, biometric, and personality-based digital likeness attributes in agentic AI systems, formulated by researchers from academia and industry.
Q: How does the DIRF framework improve AI system performance?
A: The DIRF framework enhances LLM performance across metrics, achieving greater prompt reliability and execution stability, while also protecting human identity and ensuring ethical use of AI.