Preventions
- ID: PV081
- Created: 24th October 2025
- Updated: 24th October 2025
- Contributor: The ITM Team
AI Usage Policy
An AI Usage Policy is a formally adopted organizational policy that governs the appropriate, sanctioned, and secure use of artificial intelligence systems, tools, and models by members of the population. It is designed to preempt misuse, establish accountability, and reduce ambiguity around the use of generative models, AI-enhanced tooling, and decision-automation systems within the organizational environment.
A comprehensive AI Usage Policy mitigates insider risk by codifying restrictions on data input, model interaction, model deployment, and tool integration, especially in contexts involving sensitive data, proprietary logic, or externally facing automation. In the absence of such guidance, subjects may unintentionally or deliberately disclose confidential information to public models, delegate sensitive decisions to unsanctioned systems, or introduce unmanaged shadow AI tooling into operational workflows.
Key Prevention Measures:
- Policy Enumeration: The AI Usage Policy must follow an enumerated structure, allowing investigators and stakeholders to precisely reference policy clauses when documenting or responding to AI-related infringements.
- Permitted Use Cases: Clearly define which AI tools are approved, for which functions, and under what conditions. Distinctions should be made between internal, sanctioned AI deployments and public, third-party platforms (e.g. ChatGPT, Copilot).
- Data Input Restrictions: Prohibit the entry of regulated data types (e.g. PII, PHI, classified content) into external or non-controlled AI systems. This restriction must be backed by enforceable policy language and reinforced through data labeling and access classification.
- Model Interaction Safeguards: Where use of generative AI is permitted, require integration into systems that log prompts, maintain user attribution, and support retrospective investigation in the event of policy violation or data leakage (see the sketch following this list, which also illustrates the data input restrictions above).
- Tool Approval Workflows: Require that new AI-enabled tools undergo formal security, legal, and operational review before deployment. This ensures visibility and governance over the introduction of capabilities that could affect risk posture.
- Change Control and Oversight: Governance responsibility should reside with a designated internal body, empowered to interpret, update, and enforce AI policy clauses in alignment with evolving capabilities and threats.
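
To make the Data Input Restrictions and Model Interaction Safeguards concrete, the sketch below shows a minimal prompt gateway that screens outbound prompts for regulated data patterns and writes an attributed audit record for every decision. The patterns, function names, and log schema are illustrative assumptions, not part of the policy itself or a reference to any particular product.

```python
import json
import logging
import re
from datetime import datetime, timezone

# Illustrative patterns for regulated data. A real deployment would use the
# organization's DLP classifiers and data labels instead of ad hoc regexes.
REGULATED_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{20,}\b"),
}

audit_log = logging.getLogger("ai_gateway.audit")


def screen_prompt(user_id: str, prompt: str) -> bool:
    """Return True if the prompt may be forwarded to the AI platform.

    Every decision is written to an attributed audit record so that an
    investigation can later reconstruct who sent what, and when.
    """
    hits = sorted(
        name for name, pattern in REGULATED_PATTERNS.items()
        if pattern.search(prompt)
    )
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "prompt_length": len(prompt),
        "blocked": bool(hits),
        "matched_patterns": hits,  # log pattern names, never raw content
    }))
    return not hits
```

In a production deployment the raw prompt would also be captured to a restricted store for retrospective investigation, with the pattern set maintained by the governance body named in the policy.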
Sections
**IF028: Delegated Execution via Artificial Intelligence Agents**

A subject causes organizational harm by delegating the execution of an infringement to an artificial intelligence (AI) agent, including in circumstances where the agent possesses access or authority beyond the subject’s own direct permissions.
This behavior occurs when a subject authorizes, configures, or directs an AI agent to perform operational actions inside the trusted environment that result in unauthorized access, data loss, fraud, operational disruption, or other policy-violating impact. The AI agent executes the harmful activity on the subject’s behalf.
An AI agent, in this context, is a system capable of autonomously or semi-autonomously performing structured tasks, interacting with enterprise systems, invoking APIs, chaining actions across platforms, or maintaining persistent monitoring logic. Unlike simple prompt-based AI use, the agent is empowered to act within organizational systems.
In certain environments, AI agents are deployed with elevated or system-level permissions to enable productivity, indexing, analytics, or workflow automation. A subject may intentionally leverage this broader authority to access data, systems, or functionality that exceeds their own interactive role-based access. When such delegated activity results in material harm, it constitutes an infringement under this Section.
The defining characteristic is that the harmful act is executed by the AI agent under authority granted or exploited by the subject. The subject extends the organization’s trust boundary to an autonomous system and operationalizes it to inflict harm.
The subject remains fully accountable for the resulting impact. The AI agent amplifies speed, scale, and efficiency, and in cases of elevated agent permissions, may enable privilege amplification beyond the subject’s direct access.

**PR035: Delegated Preparation via Artificial Intelligence Agents**

A subject prepares for potential insider activity by delegating to an artificial intelligence (AI) agent preparatory tasks that assist in building operational capability within the trusted environment.
This behavior occurs when a subject directs an AI agent to perform structured tasks that organize information, configure automation workflows, model identities, or otherwise prepare conditions that enable future infringement. The AI agent performs activities that reduce the effort, complexity, or time required to later execute harmful actions.
Unlike direct infringement, these preparatory actions do not themselves cause material organizational impact. Instead, they establish capability or readiness for subsequent misuse. The AI agent acts as a preparatory tool, performing tasks that would otherwise require significant manual effort or may not be practically achievable through manual activity alone.
The defining characteristic of this Section is delegated preparation. The subject uses the AI agent to build capability that facilitates later insider activity. The agent performs preparatory work that extends the subject’s ability to organize information, automate tasks, or otherwise position themselves to cause loss or harm.
While these actions may initially appear consistent with legitimate productivity use, they can represent early indicators of developing insider threat when the activity exceeds legitimate business purpose or targets sensitive organizational information.

**ME030: Enterprise-Integrated AI Platforms**

A subject operates within an environment where artificial intelligence (AI) platforms or agents are integrated across multiple enterprise systems, providing centralized access to data, services, or functionality within the organization.
These platforms are typically deployed to support productivity, knowledge retrieval, automation, or decision-making. As part of their implementation, they may be connected to internal repositories, collaboration tools, identity systems, ticketing platforms, or other business-critical services. Integration is often achieved through APIs, service accounts, or enterprise-wide indexing capabilities.
As a result, the AI platform may provide a single point of access to data, services, and functionality spanning these connected systems.
This form of integration creates a consolidated access layer within the environment that differs from standard user interaction patterns. Rather than accessing systems individually, the subject may interact with multiple data sources or services through the AI platform.
In some cases, the scope of access available through the platform may not align precisely with role-based access expectations, particularly where data is aggregated, summarized, or retrieved across systems. The platform may also operate with service account permissions or API-level access that the subject cannot reach through traditional interfaces, creating a divergence between user-level access and effective access via the platform.
This Section captures the availability of AI platforms that are integrated into the enterprise environment with broad access to data or systems. While deployed for legitimate operational purposes, such platforms may provide expanded capability that can be leveraged by a subject in the course of insider activity.

**IF001.006: Exfiltration via Generative AI Platform**

The subject transfers sensitive, proprietary, or classified information into an external generative AI platform through text input, file upload, API integration, or embedded application features. This results in uncontrolled data exposure to third-party environments outside organizational governance, potentially violating confidentiality, regulatory, or contractual obligations.
Example Scenario: A subject copies sensitive internal financial projections into a public generative AI chatbot to "optimize" executive presentation materials. The AI provider, per its terms of use, retains inputs for service improvement and model fine-tuning. Sensitive data, now stored outside corporate control, becomes vulnerable to exposure through potential data breaches, subpoena, insider misuse at the service provider, or future unintended model outputs.
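
Detection of this pattern typically begins at the network edge. The sketch below flags unusually large uploads to known generative AI endpoints from proxy logs; the domain list, log schema, and threshold are illustrative assumptions, not a prescribed control.

```python
from collections import defaultdict

# Illustrative destination list; a real control would use a curated,
# regularly updated inventory of generative AI endpoints.
GENAI_DOMAINS = {"chat.openai.com", "api.openai.com", "claude.ai", "gemini.google.com"}

UPLOAD_THRESHOLD_BYTES = 100_000  # tune to the environment's baseline


def flag_genai_egress(proxy_events):
    """Yield (user, host, bytes_sent) for large uploads to generative AI hosts.

    `proxy_events` is an iterable of dicts with 'user', 'host', and
    'bytes_out' keys -- an assumed proxy log schema for illustration.
    """
    totals = defaultdict(int)
    for event in proxy_events:
        host = event["host"].lower()
        if host in GENAI_DOMAINS:
            totals[(event["user"], host)] += event["bytes_out"]
    for (user, host), sent in totals.items():
        if sent > UPLOAD_THRESHOLD_BYTES:
            yield user, host, sent
```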
**IF028.001: AI Agent Internal Reconnaissance**

A subject causes harm by directing an artificial intelligence (AI) agent to generate unauthorized internal intelligence about the organization: information the subject would not reasonably possess through legitimate role-based access or business need.
This behavior occurs when an AI agent is used to systematically query, correlate, or infer sensitive internal information by leveraging its broad integrations, indexing authority, or analytical capabilities. The resulting intelligence may reveal restricted projects, executive decisions, legal matters, acquisition activity, investigation status, architectural weaknesses, or other sensitive organizational insight.
Unlike routine search or manual browsing, AI agent internal reconnaissance enables systematic querying, correlation, and inference across many sources at a speed and scale that manual effort could not match within policy boundaries.
In certain enterprise deployments, authorized AI platforms possess integration-level access across knowledge bases, ticketing systems, document repositories, messaging platforms, and identity directories. A subject may deliberately leverage this aggregated visibility to extract or infer intelligence beyond their legitimate business scope.
The infringement is established when the agent-generated intelligence materially exceeds legitimate business need and results in unauthorized exposure of sensitive organizational insight. The harm lies in the unauthorized acquisition of internal intelligence—particularly where that intelligence enables subsequent exploitation, trading, coercion, or strategic misuse.
The defining characteristic is not merely access, but computational synthesis. The AI agent transforms distributed internal data into actionable intelligence that the subject could not reasonably derive manually within policy boundaries.

**IF028.002: AI Agent Privilege Exploitation**

A subject commits an infringement by exploiting the elevated, aggregated, or differently scoped permissions of an artificial intelligence (AI) agent to obtain access to restricted data or systems beyond their authorized role.
This behavior occurs when an AI agent operates with service account privileges, enterprise-wide indexing authority, cross-platform integrations, or API-level permissions that exceed the subject’s direct interactive access. The subject intentionally leverages that authority to retrieve, view, or extract protected information.
The infringement is established when the AI agent accesses restricted repositories, datasets, or systems that the subject could not lawfully access using their own credentials. The harm lies in the bypass of role-based access controls through delegated authority.
The defining characteristic is delegated access control bypass. The AI agent exercises permissions that differ from or exceed the subject’s own access scope, and the subject exploits that differential to obtain protected information.
The subject remains fully accountable for the misuse of the agent’s authority. The infringement arises from leveraging expanded system trust to circumvent established access controls.

**IF028.003: AI Agent Impersonation Execution**

A subject commits an infringement by delegating impersonation activity to an artificial intelligence (AI) agent that autonomously or semi-autonomously executes deceptive communications within or outside the organization.
This behavior occurs when a subject configures or tasks an AI agent to replicate the identity, tone, authority, or communication style of another individual (such as an executive, HR representative, legal counsel, or trusted colleague) and the agent executes impersonation actions that result in material harm.
The AI agent may be directed to compose and send messages, respond to incoming replies, or sustain persistent exchanges in the impersonated identity's voice and authority.
Unlike manual impersonation, this behavior involves delegated execution. The AI agent operates as the impersonation engine, producing and transmitting deceptive content at scale or with persistence beyond what the subject could realistically maintain manually.
The infringement is established when the AI agent executes deceptive communications that result in fraud, credential compromise, unauthorized disclosure, reputational harm, or operational disruption.
The defining characteristic is the autonomous execution of impersonation through an AI agent acting under the subject’s direction.
The subject remains fully accountable for the deception and resulting harm. The AI agent amplifies realism, adaptability, and scale, significantly increasing the effectiveness and persistence of impersonation-based misconduct.

**PR035.001: AI Agent Data Staging**

A subject prepares for potential insider activity by directing an artificial intelligence (AI) agent to aggregate, organize, or transform sensitive organizational data into structured or portable formats.
This behavior occurs when an AI agent is tasked with systematically collecting information from internal repositories and consolidating it into outputs that are easier to store, review, transfer, or exploit. The agent performs bulk summarization, data normalization, or cross-repository aggregation that significantly reduces the effort required to later misuse the information.
Unlike reconnaissance activities that focus on discovering intelligence, AI Agent Data Staging focuses on operational preparation of data. The AI agent converts dispersed or complex internal information into consolidated outputs that increase its portability, usability, or accessibility outside its original context.
The defining characteristic of this Sub-section is the delegated consolidation of sensitive information. The subject leverages the AI agent to perform scalable data preparation that increases the volume, portability, or usability of organizational data.
While the staged data may not yet have been transferred outside the organization, the consolidation process materially lowers the effort required to exfiltrate or exploit it. In environments where AI platforms possess broad repository visibility, this capability can significantly accelerate the preparation phase of insider activity.
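
Where agent interactions are logged centrally, as PV081 recommends, staging behavior can be surfaced by looking for sessions that aggregate across unusually many repositories. A minimal sketch, assuming an audit log with user, session, and repository fields (the schema and threshold are illustrative):

```python
from collections import defaultdict

DISTINCT_REPO_THRESHOLD = 10  # tune against the environment's normal baseline


def flag_bulk_staging(agent_audit_events):
    """Return (user, session_id, repo_count) for sessions spanning many repositories.

    `agent_audit_events` is an assumed audit schema: dicts carrying 'user',
    'session_id', and 'repository' keys for each agent retrieval action.
    """
    repos = defaultdict(set)
    for event in agent_audit_events:
        repos[(event["user"], event["session_id"])].add(event["repository"])
    return [
        (user, session, len(touched))
        for (user, session), touched in repos.items()
        if len(touched) >= DISTINCT_REPO_THRESHOLD
    ]
```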
**ME030.001: AI Platform Aggregated Data Access**

A subject has access to an artificial intelligence (AI) platform that aggregates data from multiple internal systems and presents it through a unified interface, where access controls are insufficiently enforced or misaligned with underlying role-based access restrictions.
These platforms are typically configured to index, query, or retrieve information from enterprise repositories such as file storage systems, collaboration platforms, knowledge bases, and internal documentation systems. Data from these sources may be combined, summarized, or surfaced in response to a single query.
In some implementations, the platform aggregates data across repositories without consistently applying the access controls of the underlying systems. As a result, information may be surfaced through the AI interface that the subject would not ordinarily access through direct interaction with those systems.
The AI platform may therefore surface combined or summarized information drawn from repositories the subject could not query directly through the underlying systems.
This access model creates a divergence between the subject’s direct access permissions and the information available to them through the AI platform. Data that is distributed, restricted, or contextually separated within underlying systems may be surfaced together through aggregated queries.
The presence of aggregated data access with insufficiently constrained access controls provides the subject with a means to obtain information beyond their intended role-based scope, particularly where enterprise-wide indexing or broad query capabilities are implemented.
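
One way to close this gap is to re-evaluate the source system's ACL for each retrieved item in the requesting user's context before it is surfaced. A minimal sketch, with assumed types and field names:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class RetrievedDocument:
    doc_id: str
    source_system: str
    allowed_principals: frozenset  # ACL mirrored from the source system at index time


def filter_to_user_access(user_principals: set, results: list) -> list:
    """Drop any retrieved document the requesting user could not read directly.

    Re-checking the source system's ACL at query time keeps the platform's
    effective access aligned with the user's role-based access, even though
    the index itself may have been built with service-account privileges.
    """
    return [
        doc for doc in results
        if user_principals & doc.allowed_principals
    ]
```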
**ME030.002: AI Platform System Interaction Capability**

A subject has access to an artificial intelligence (AI) platform that is integrated with internal systems and capable of interacting with those systems through APIs, service accounts, automation frameworks, or agent interaction protocols (e.g., Model Context Protocol (MCP)), where the platform operates with permissions or capabilities that exceed typical user-level access controls.
These platforms are connected to enterprise systems such as identity services, ticketing platforms, communication tools, file storage systems, and other operational applications. Integration enables the platform to execute actions, retrieve data, or interact with system functionality on behalf of the user.
In some implementations, the platform is granted broad or persistent permissions to support automation and cross-system functionality. These permissions may not align precisely with the subject’s role-based access and may allow the platform to perform actions or retrieve data beyond what the subject could achieve through direct interaction with the underlying systems.
The AI platform may execute actions, retrieve data, or coordinate activity across these systems on the subject's behalf, exercising the platform's own permissions rather than the subject's.
This interaction model creates a divergence between the subject’s direct capabilities and the effective capabilities available through the AI platform. Actions that would normally require elevated access, multi-system coordination, or additional authorization may be performed through the platform’s integrated functionality.
The presence of AI platforms with system interaction capability and insufficiently constrained permissions provides the subject with a means to interact with internal systems and services beyond their intended role-based authority.
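
A common mitigation is to authorize each tool invocation against the invoking user's own permissions rather than the platform's service account. The sketch below uses assumed tool names and a generic permission model; it is not an actual MCP API.

```python
class ToolAuthorizationError(Exception):
    """Raised when the invoking user lacks the permission a tool requires."""


# Illustrative mapping of platform tools to user-level permissions.
TOOL_PERMISSIONS = {
    "read_ticket": "ticketing.read",
    "close_ticket": "ticketing.write",
    "search_files": "files.read",
}


def invoke_tool(user_permissions: set, tool_name: str, handler, **kwargs):
    """Run a platform tool only if the invoking *user* holds the permission.

    The platform's service account may be capable of far more; gating each
    call on the user's own permissions prevents the divergence this Section
    describes between user-level and platform-level authority.
    """
    required = TOOL_PERMISSIONS.get(tool_name)
    if required is None or required not in user_permissions:
        raise ToolAuthorizationError(
            f"user lacks '{required}' required by tool '{tool_name}'"
        )
    return handler(**kwargs)
```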