
Insider Threat Matrix™
  • ID: PV081
  • Created: 24th October 2025
  • Updated: 24th October 2025
  • Contributor: The ITM Team

AI Usage Policy

An AI Usage Policy is a formally adopted organizational policy that governs the appropriate, sanctioned, and secure use of artificial intelligence systems, tools, and models by members of the population. It is designed to preempt misuse, establish accountability, and reduce ambiguity around the use of generative models, AI-enhanced tooling, and decision-automation systems within the organizational environment.


A comprehensive AI Usage Policy mitigates insider risk by codifying restrictions on data input, model interaction, model deployment, and tool integration, especially in contexts involving sensitive data, proprietary logic, or externally facing automation. In the absence of such guidance, subjects may unintentionally or deliberately disclose confidential information to public models, delegate sensitive decisions to unsanctioned systems, or introduce unmanaged shadow AI tooling into operational workflows.


Key Prevention Measures:

  • Policy Enumeration: The AI Usage Policy must follow an enumerated structure, allowing investigators and stakeholders to precisely reference policy clauses when documenting or responding to AI-related infringements.
  • Permitted Use Cases: Clearly define which AI tools are approved, for which functions, and under what conditions. Distinctions should be made between internal, sanctioned AI deployments and public, third-party platforms (e.g., ChatGPT, Copilot).
  • Data Input Restrictions: Prohibit the entry of regulated data types (e.g., PII, PHI, classified content) into external or non-controlled AI systems. This restriction must be backed by enforceable policy language and reinforced through data labeling and access classification.
  • Model Interaction Safeguards: Where use of generative AI is permitted, require integration into systems that log prompts, maintain user attribution, and support retrospective investigation in the event of policy violation or data leakage. A minimal sketch combining this safeguard with the two preceding measures appears after this list.
  • Tool Approval Workflows: Require that new AI-enabled tools undergo formal security, legal, and operational review before deployment. This ensures visibility and governance over the introduction of capabilities that could affect risk posture.
  • Change Control and Oversight: Governance responsibility should reside with a designated internal body, empowered to interpret, update, and enforce AI policy clauses in alignment with evolving capabilities and threats.
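
As an illustration of how Policy Enumeration, Data Input Restrictions, and Model Interaction Safeguards can reinforce one another, the following Python sketch screens prompts against enumerated policy clauses before anything is forwarded to a model, and writes an attributed audit record for every interaction. The clause identifiers (e.g., AIUP-3.1), detection patterns, audit log location, and forwarding step are illustrative assumptions, not part of the ITM framework or any particular product.

    import getpass
    import json
    import re
    import time
    import uuid

    # Hypothetical enumerated clause IDs from an AI Usage Policy. The
    # identifiers and patterns below are illustrative, not prescriptive.
    BLOCKED_PATTERNS = {
        "AIUP-3.1 (no PII in external models)": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "AIUP-3.2 (no payment card data)": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "AIUP-3.3 (no classified markings)": re.compile(r"\b(CONFIDENTIAL|SECRET)\b", re.I),
    }

    AUDIT_LOG = "ai_prompt_audit.jsonl"  # append-only record for retrospective investigation

    def screen_and_log(prompt: str) -> bool:
        """Screen a prompt against policy patterns and write an attributed audit record.

        Returns True if the prompt may be forwarded to the sanctioned model endpoint.
        """
        violations = [clause for clause, pattern in BLOCKED_PATTERNS.items()
                      if pattern.search(prompt)]
        record = {
            "event_id": str(uuid.uuid4()),
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            "user": getpass.getuser(),  # user attribution
            "prompt": prompt,           # retained for investigation
            "violations": violations,   # enumerated clause references
            "action": "blocked" if violations else "forwarded",
        }
        with open(AUDIT_LOG, "a", encoding="utf-8") as fh:
            fh.write(json.dumps(record) + "\n")
        return not violations

    if __name__ == "__main__":
        if screen_and_log("Summarize Q3 revenue; card 4111 1111 1111 1111"):
            pass  # forward the prompt to the sanctioned model endpoint here
        else:
            print("Prompt blocked; see audit log for the violated clause.")

Recording the violated clause identifier alongside the user and timestamp gives investigators a direct pointer from an alert back to the enumerated policy text, which is precisely what the enumerated structure is meant to enable.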

Sections

  • ID: IF001.006
  • Name: Exfiltration via Generative AI Platform

The subject transfers sensitive, proprietary, or classified information into an external generative AI platform through text input, file upload, API integration, or embedded application features. This results in uncontrolled data exposure to third-party environments outside organizational governance, potentially violating confidentiality, regulatory, or contractual obligations.


Characteristics

  • Involves manual or automated transfer of sensitive data through:
      • Web-based AI interfaces (e.g., ChatGPT, Claude, Gemini).
      • Upload of files (e.g., PDFs, DOCX, CSVs) for summarization, parsing, or analysis.
      • API calls to generative AI services from scripts or third-party SaaS integrations.
      • Embedded AI features inside productivity suites (e.g., Copilot in Microsoft 365, Gemini in Google Workspace).
  • Subjects may act with or without malicious intent—motivated by efficiency, convenience, curiosity, or deliberate exfiltration.
  • Data transmitted may be stored, cached, logged, or used for model retraining, depending on provider-specific terms of service and API configurations.
  • Exfiltration through generative AI channels often evades traditional DLP (Data Loss Prevention) patterns due to novel data formats, variable input methods, and encrypted traffic.
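
Because prompt traffic is typically carried over encrypted channels, payload inspection alone is unreliable; reviewing web proxy logs for connections to known generative AI services offers a complementary signal. The Python sketch below assumes a hypothetical CSV proxy log with timestamp, user, host, and bytes_out columns (field names vary by proxy vendor) and a locally maintained hostname list; both are assumptions for illustration only.

    import csv

    # Hypothetical, locally maintained list of public generative AI hostnames.
    # Real deployments would curate and update this centrally.
    GENAI_DOMAINS = {
        "chat.openai.com",
        "api.openai.com",
        "claude.ai",
        "gemini.google.com",
    }

    def flag_genai_traffic(proxy_log_path: str):
        """Yield proxy-log rows whose destination host is a known generative AI service.

        Assumes a CSV log with timestamp, user, host, and bytes_out columns.
        """
        with open(proxy_log_path, newline="", encoding="utf-8") as fh:
            for row in csv.DictReader(fh):
                if row.get("host", "").lower() in GENAI_DOMAINS:
                    yield row

    if __name__ == "__main__":
        for hit in flag_genai_traffic("proxy.csv"):
            print(f"{hit['timestamp']} {hit['user']} -> {hit['host']} "
                  f"({hit['bytes_out']} bytes out)")

Thresholding on fields such as bytes_out can further separate bulk uploads from casual browsing, which helps distinguish convenience-driven use from potential exfiltration.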


Example Scenario

A subject copies sensitive internal financial projections into a public generative AI chatbot to "optimize" executive presentation materials. The AI provider, per its terms of use, retains inputs for service improvement and model fine-tuning. Sensitive data—now stored outside corporate control—becomes vulnerable to exposure through potential data breaches, subpoena, insider misuse at the service provider, or future unintended model outputs.