
Insider Threat Matrix™
  • ID: IF028
  • Created: 3rd March 2026
  • Updated: 3rd March 2026
  • Contributor: The ITM Team

Delegated Execution via Artificial Intelligence Agents

A subject causes organizational harm by delegating the execution of an infringement to an artificial intelligence (AI) agent, including in circumstances where the agent possesses access or authority beyond the subject’s own direct permissions.

 

This behavior occurs when a subject authorizes, configures, or directs an AI agent to perform operational actions inside the trusted environment that result in unauthorized access, data loss, fraud, operational disruption, or other policy-violating impact. The AI agent executes the harmful activity on the subject’s behalf.

 

An AI agent, in this context, is a system capable of autonomously or semi-autonomously performing structured tasks, interacting with enterprise systems, invoking APIs, chaining actions across platforms, or maintaining persistent monitoring logic. Unlike simple prompt-based AI use, the agent is empowered to act within organizational systems.

 

In certain environments, AI agents are deployed with elevated or system-level permissions to enable productivity, indexing, analytics, or workflow automation. A subject may intentionally leverage this broader authority to access data, systems, or functionality that exceeds their own interactive role-based access. When such delegated activity results in material harm, it constitutes an infringement under this Section.

 

Examples include:

 

  • Directing an AI agent with repository-wide indexing permissions to aggregate sensitive documents outside the subject’s legitimate need.
  • Leveraging an AI agent’s service account privileges to enumerate restricted datasets.
  • Tasking an AI agent integrated with identity or ticketing systems to extract privileged operational information.
  • Using an AI agent’s cross-platform automation authority to stage or transfer data at scale.

 

The defining characteristic is that the harmful act is executed by the AI agent under authority granted or exploited by the subject. The subject extends the organization’s trust boundary to an autonomous system and operationalizes it to inflict harm.

 

The subject remains fully accountable for the resulting impact. The AI agent amplifies speed, scale, and efficiency, and in cases of elevated agent permissions, may enable privilege amplification beyond the subject’s direct access.
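Because the subject remains accountable for actions the agent executes, detection logic can re-attribute agent-performed actions to the human who tasked the agent. A minimal sketch, assuming a hypothetical audit schema in which each event records both the acting principal and an `initiator` field naming the delegating human (field names are illustrative, not from any specific product):

```python
# Hypothetical audit-log schema: each event records the principal that
# performed the action and, where available, the human who initiated
# the delegation. Field names are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class AuditEvent:
    actor: str       # principal that performed the action (user or agent)
    initiator: str   # human principal that tasked the agent ("" if none)
    action: str
    resource: str

def attribute_agent_actions(events, agent_principals):
    """Map each initiating subject to the actions their agents executed."""
    attributed = {}
    for e in events:
        if e.actor in agent_principals and e.initiator:
            attributed.setdefault(e.initiator, []).append((e.action, e.resource))
    return attributed
```

In practice the initiator is often recoverable only by joining the agent's task queue or prompt history against the audit trail; this sketch assumes that join has already been made.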

Subsections (3)

IF028.003 AI Agent Impersonation Execution

A subject commits an infringement by delegating impersonation activity to an artificial intelligence (AI) agent that autonomously or semi-autonomously executes deceptive communications within or outside the organization.

 

This behavior occurs when a subject configures or tasks an AI agent to replicate the identity, tone, authority, or communication style of another individual (such as an executive, HR representative, legal counsel, or trusted colleague) and the agent executes impersonation actions that result in material harm.

 

The AI agent may be directed to:

 

  • Learn or replicate a specific identity based on internal communications.
  • Generate context-aware communications dynamically.
  • Automatically send or respond to messages.
  • Adapt content based on recipient replies.
  • Sustain multi-step interactions without direct manual drafting by the subject.

 

Unlike manual impersonation, this behavior involves delegated execution. The AI agent operates as the impersonation engine, producing and transmitting deceptive content at scale or with persistence beyond what the subject could realistically maintain manually.

 

Examples include:

 

  • An AI agent generating and dispatching executive-style requests for financial transfers.
  • Automated conversations designed to solicit credentials or sensitive documents.
  • AI-driven responses to follow-up questions that maintain the credibility of a fabricated identity.
  • Persistent impersonation campaigns targeting internal departments or external partners.

 

The infringement is established when the AI agent executes deceptive communications that result in fraud, credential compromise, unauthorized disclosure, reputational harm, or operational disruption.

 

The defining characteristic is the autonomous execution of impersonation through an AI agent acting under the subject’s direction.

 

The subject remains fully accountable for the deception and resulting harm. The AI agent amplifies realism, adaptability, and scale, significantly increasing the effectiveness and persistence of impersonation-based misconduct.
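One observable artifact of delegated impersonation is a mismatch between the principal that actually transmits a message and the identity displayed to the recipient. A minimal detection sketch, assuming a hypothetical message log with `sending_principal` and `display_identity` fields (both names are illustrative):

```python
def flag_delegated_impersonation(messages, service_principals):
    """Flag messages transmitted by an automation/service identity that
    present a different (human) identity to the recipient."""
    flagged = []
    for m in messages:
        if (m["sending_principal"] in service_principals
                and m["display_identity"] != m["sending_principal"]):
            flagged.append(m)
    return flagged
```

This heuristic will also match benign send-on-behalf automation, so in practice it would feed a triage queue rather than block delivery outright.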

IF028.001 AI Agent Internal Reconnaissance

A subject causes harm by directing an artificial intelligence (AI) agent to generate unauthorized internal intelligence about the organization—information the subject would not reasonably possess through legitimate role-based access or business need.

 

This behavior occurs when an AI agent is used to systematically query, correlate, or infer sensitive internal information by leveraging its broad integrations, indexing authority, or analytical capabilities. The resulting intelligence may reveal restricted projects, executive decisions, legal matters, acquisition activity, investigation status, architectural weaknesses, or other sensitive organizational insight.

 

Unlike routine search or manual browsing, AI agent internal reconnaissance enables:

 

  • Cross-repository correlation of fragmented data.
  • Inference generation from distributed signals.
  • Relationship mapping between people, systems, and initiatives.
  • Aggregation of intelligence from platforms the subject does not routinely access.
  • Persistent monitoring for developments related to sensitive topics.

 

In certain enterprise deployments, authorized AI platforms possess integration-level access across knowledge bases, ticketing systems, document repositories, messaging platforms, and identity directories. A subject may deliberately leverage this aggregated visibility to extract or infer intelligence beyond their legitimate business scope.

 

Examples include:

 

  • Tasking an AI agent to determine whether a confidential acquisition is underway by correlating calendar entries, procurement tickets, and legal document metadata.
  • Directing an agent to summarize all references to an internal investigation across multiple repositories.
  • Instructing an agent to infer likely restructuring plans based on hiring freezes, budget adjustments, and executive communications.
  • Using an AI platform’s enterprise-wide indexing capability to identify sensitive project names or legal matters outside the subject’s department.

 

The infringement is established when the agent-generated intelligence materially exceeds legitimate business need and results in unauthorized exposure of sensitive organizational insight. The harm lies in the unauthorized acquisition of internal intelligence—particularly where that intelligence enables subsequent exploitation, trading, coercion, or strategic misuse.

 

The defining characteristic is not merely access, but computational synthesis. The AI agent transforms distributed internal data into actionable intelligence that the subject could not reasonably derive manually within policy boundaries.
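The breadth of an agent session relative to the tasking subject's legitimate scope is one measurable signal for this kind of synthesis. A minimal sketch, assuming a hypothetical query log in which each agent query records the repository it touched, and a per-subject set of in-scope repositories (both structures are assumptions for illustration):

```python
def recon_indicators(session_queries, subject_scope, breadth_threshold=3):
    """Flag an agent session whose queries span repositories well beyond
    the tasking subject's legitimate scope.

    session_queries: list of dicts with a "repository" key
    subject_scope:   set of repositories the subject legitimately uses
    """
    touched = {q["repository"] for q in session_queries}
    out_of_scope = touched - subject_scope
    return {
        "out_of_scope_repos": sorted(out_of_scope),
        "flag": len(out_of_scope) >= breadth_threshold,
    }
```

The threshold is a tuning parameter; the point is that correlation-style reconnaissance tends to fan out across many repositories the subject never touches interactively, which simple per-repository access checks will not catch.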

IF028.002 AI Agent Privilege Exploitation

A subject commits an infringement by exploiting the elevated, aggregated, or differently scoped permissions of an artificial intelligence (AI) agent to obtain access to restricted data or systems beyond their authorized role.

 

This behavior occurs when an AI agent operates with service account privileges, enterprise-wide indexing authority, cross-platform integrations, or API-level permissions that exceed the subject’s direct interactive access. The subject intentionally leverages that authority to retrieve, view, or extract protected information.

 

The infringement is established when the AI agent accesses restricted repositories, datasets, or systems that the subject could not lawfully access using their own credentials. The harm lies in the bypass of role-based access controls through delegated authority.

 

Examples include:

 

  • Using an enterprise AI platform with organization-wide document indexing to retrieve files from restricted executive, legal, or HR repositories.
  • Directing an AI-integrated service account to query databases unavailable to the subject’s user account.
  • Leveraging AI platform integrations with identity or HR systems to obtain sensitive personnel or compensation data outside the subject’s authorization.
  • Extracting restricted documents through the AI interface that are not visible through the subject’s standard application access.

 

The defining characteristic is delegated access control bypass. The AI agent exercises permissions that differ from or exceed the subject’s own access scope, and the subject exploits that differential to obtain protected information.

 

The subject remains fully accountable for the misuse of the agent’s authority. The infringement arises from leveraging expanded system trust to circumvent established access controls.
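The access-control bypass described here can be framed as a set difference: the resources the agent's service account can reach minus the resources the subject's own role can reach. A minimal sketch, assuming permissions have already been flattened into sets of resource identifiers (the flattening step and field names are assumptions, not a specific product's model):

```python
def privilege_differential(agent_perms, subject_perms):
    """Resources the agent can reach that the subject cannot."""
    return agent_perms - subject_perms

def bypass_candidates(events, differential):
    """Agent actions, taken on the subject's behalf, that land inside
    the privilege differential; these are candidate access-control
    bypasses for review."""
    return [e for e in events if e["resource"] in differential]
```

Any hit in the differential is not automatically an infringement; the section's test is whether the subject intentionally exploited that differential, so these candidates would feed an investigation rather than an automated verdict.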