
Insider Threat Matrix™
  • ID: DT113
  • Created: 28th April 2025
  • Updated: 28th April 2025
  • Contributor: The ITM Team

Tracking Patterns of Policy Violations

Monitor and analyze minor policy violations over time to detect emerging behavioral patterns that may indicate boundary testing, behavioral drift, or preparation for more serious misconduct. Isolated minor infringements may appear benign, but repeated or clustered incidents can signal a developing threat trajectory.

 

Detection Methods

  • Maintain centralized logging of all recorded policy violations, including low-severity infractions, within case management, HR, or security systems.
  • Implement analytical tools or workflows that flag individuals with multiple minor violations within defined timeframes (e.g., repeated unauthorized device use, bypassing security protocols, small unauthorized disclosures).
  • Correlate minor violation data with other risk indicators such as unauthorized access attempts, changes in behavioral baselines, or indicators of disgruntlement.
  • Analyze patterns across teams, units, or operational areas to detect systemic issues or cultural tolerance of rule-breaking behaviors.
  • Conduct periodic behavioral risk reviews that explicitly include minor infractions as part of insider threat monitoring programs.
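The second detection method above — flagging individuals who accumulate multiple minor violations within a defined timeframe — can be sketched as a simple sliding-window check. This is a minimal illustration, not an ITM-provided tool; the record format, threshold, and window are hypothetical and would be tuned to the organization's case management data.

```python
from collections import defaultdict
from datetime import date

# Hypothetical violation records: (subject_id, violation_date)
violations = [
    ("emp-101", date(2025, 1, 5)),
    ("emp-101", date(2025, 2, 10)),
    ("emp-101", date(2025, 3, 1)),
    ("emp-202", date(2025, 1, 20)),
]

def flag_repeat_offenders(records, threshold=3, window_days=90):
    """Return subject IDs with `threshold` or more violations inside a sliding window."""
    by_subject = defaultdict(list)
    for subject, day in records:
        by_subject[subject].append(day)
    flagged = set()
    for subject, days in by_subject.items():
        days.sort()
        # Check every run of `threshold` consecutive violations for window fit.
        for i in range(len(days) - threshold + 1):
            if (days[i + threshold - 1] - days[i]).days <= window_days:
                flagged.add(subject)
                break
    return flagged

print(flag_repeat_offenders(violations))  # {'emp-101'}
```

In practice the flagged set would feed a behavioral risk review rather than trigger automatic action, since context (corrective action taken, violation severity) matters.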

Indicators

  • Subjects accumulating multiple low-level infractions without corresponding corrective action or behavioral improvement.
  • Increased frequency or severity of minor violations over time, suggesting desensitization or emboldenment.
  • Violations spanning multiple domains (e.g., IT security, operational protocols, HR policy), indicating generalized disregard for rules.
  • Evidence that minor violations are clustered around operational pressures, major organizational changes, or periods of reduced oversight.

Sections

MT022 - Boundary Testing

The subject deliberately pushes or tests organizational policies, rules, or controls to assess tolerance levels, detect oversight gaps, or gain a sense of impunity. While initial actions may appear minor or exploratory, boundary testing serves as a psychological and operational precursor to more serious misconduct.

 

Characteristics

  • Motivated by curiosity, challenge-seeking, or early-stage dissatisfaction.
  • Actions often start small: minor policy violations, unauthorized accesses, or circumvention of procedures.
  • Rationalizations include beliefs that policies are overly rigid, outdated, or unfair.
  • Boundary testing behavior may escalate if it is unchallenged, normalized, or inadvertently rewarded.
  • Subjects often seek to gauge the likelihood and severity of consequences before considering larger or riskier actions.
  • Testing may be isolated or gradually evolve into opportunism, retaliation, or deliberate harm.

 

Example Scenario

A subject repeatedly circumvents minor IT security controls (e.g., bypassing content filters, using personal devices against policy) without immediate consequences. Encouraged by the lack of enforcement, the subject later undertakes unauthorized data transfers, rationalizing the behavior based on perceived inefficiencies and low risk of detection.

PR029 - Persistent Access via Bots

The subject exploits their technical role to deploy or manipulate automated bots within the organization’s environment—most commonly within collaboration platforms (e.g., Slack, Teams, Discord) or internal operational systems (e.g., Jira, ServiceNow, Helpdesk tooling). These bots are designed to persist beyond the subject’s tenure, leveraging independent service credentials (or other credentials not specifically associated with a user), webhook integrations, or unattended workflows to maintain covert access.

 

The subject may create new bots under the guise of legitimate productivity enhancements, or hijack existing integrations to expand data access, redirect output, or embed hidden monitoring functionality. Once active, these bots operate continuously, harvesting internal conversations, extracting files, or polling sensitive endpoints—often without triggering standard audit alerts tied to user accounts.

 

Because automation accounts are rarely subject to the same identity governance or offboarding scrutiny as human users, this technique enables long-term persistence, broad data visibility, and operational concealment, facilitating continued access or covert surveillance after the subject’s departure.
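Because offboarding rarely covers automation accounts, a periodic audit that cross-references bot ownership against the active employee directory is a common mitigation. The sketch below is illustrative only; the field names and inventory format are assumptions, not any specific platform's API.

```python
# Hypothetical inventory of automation accounts and their human owners,
# cross-checked against the active employee directory.
bot_accounts = [
    {"bot_id": "standup-bot", "created_by": "emp-101", "last_active": "2025-04-20"},
    {"bot_id": "ticket-sync", "created_by": "emp-309", "last_active": "2025-04-27"},
]
active_employees = {"emp-101"}  # emp-309 has departed

def orphaned_bots(bots, active):
    """Bots whose creating user is no longer active: candidates for credential review."""
    return [b["bot_id"] for b in bots if b["created_by"] not in active]

print(orphaned_bots(bot_accounts, active_employees))  # ['ticket-sync']
```

A still-active bot with a departed owner is exactly the persistence condition this section describes, so each hit warrants credential rotation or decommissioning.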

MT015.001 - Opportunism

The subject exploits circumstances for personal gain, convenience, or advantage, often without premeditation or major malicious intent. Opportunistic acts typically arise from perceived gaps in oversight, immediate personal needs, or desires, rather than long-term ideological, financial, or revenge-driven motivations.

 

Characteristics

  • Motivated by immediate self-interest rather than deep-seated grievance or ideology.
  • May rationalize actions as minor, justified, or harmless ("no one will notice," "this helps everyone," "it's not a big deal").
  • Often triggered by environmental factors such as poor oversight, operational stress, or unmet personal needs.
  • May escalate over time if not detected and corrected early.
  • Subjects often do not view themselves as "threat actors" and may retain a positive view of their organization.

Example Scenario

Senior enlisted personnel on a U.S. Navy warship collaborated to procure and install unauthorized satellite internet equipment (Starlink) to improve their onboard quality of life. Acting without command approval, they circumvented Navy IT security protocols, introducing significant operational security (OPSEC) risks. Their motive was personal convenience rather than espionage, sabotage, or financial gain.

IF027.005 - Destructive Malware Deployment

The subject deploys destructive malware: software designed to irreversibly damage systems, erase data, or disrupt operational availability. Unlike ransomware, which encrypts files to extort payment, destructive malware is deployed with the explicit intent to delete, corrupt, or disable systems and assets without recovery. Its objective is disruption or sabotage, not necessarily direct financial gain.

 

This behavior may include:

 

  • Wiper malware (e.g. HermeticWiper, WhisperGate, ZeroCleare)
  • Logic bombs or time-triggered deletion scripts
  • Bootloader overwrite tools or UEFI tampering utilities
  • Mass delete or format scripts (format, cipher /w, del /s /q, rm -rf)
  • Data corruption utilities (e.g. file rewriters, header corruptors)
  • Credential/system-wide lockout scripts (e.g. disabling accounts, resetting passwords en masse)

 

Insiders may deploy destructive malware as an act of retaliation (e.g. prior to departure), sabotage (e.g. to disrupt an investigation or competitor), or under coercion. Detonation may be manual or scheduled, and in some cases the malware is disguised as routine tooling to delay detection.

 

Destructive deployment is high-severity and often coincides with forensic tampering or precursor access-based infringements (e.g. file enumeration or backup deletion).
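The mass-delete and format commands listed above lend themselves to log-based detection. A minimal defensive sketch follows, assuming access to per-user command audit logs; the patterns and log format are illustrative and a production rule set would be environment-tuned and paired with allowlisting to control false positives.

```python
import re

# Illustrative signatures for the destructive commands named in this section.
DESTRUCTIVE_PATTERNS = [
    r"\brm\s+-rf\b",
    r"\bdel\s+/s\s+/q\b",
    r"\bcipher\s+/w\b",
    r"\bformat\s+[a-zA-Z]:",
]

def scan_command_log(lines):
    """Return audit-log lines matching any destructive-command pattern."""
    return [
        line for line in lines
        if any(re.search(p, line, re.IGNORECASE) for p in DESTRUCTIVE_PATTERNS)
    ]

log = [
    "2025-04-28 09:12 emp-441 rm -rf /srv/backups",
    "2025-04-28 09:15 emp-441 ls -la /home",
]
print(scan_command_log(log))  # first line only
```

Matching on backup paths specifically (as in the example hit) is a useful escalation signal, since backup deletion is a common precursor to detonation.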

AF029.002 - Unauthorized VPN Usage

The subject deliberately uses Virtual Private Network (VPN) technology in a manner that circumvents organizational oversight, masking the nature, destination, or content of network activity. This includes installing unapproved VPN clients, as well as reconfiguring sanctioned VPN software to route traffic through unauthorized exit nodes, personal infrastructure, or third-party services not governed by corporate policy.

 

By diverting traffic away from monitored pathways, the subject obstructs standard telemetry collection - evading logging of session destinations, data transfers, or identity-bound usage. This behavior frustrates forensic reconstruction, hinders real-time monitoring, and degrades the reliability of investigative artifacts. Unauthorized VPN usage is an intentional anti-forensics measure aimed at concealing potentially harmful activity behind layers of encrypted and unsanctioned transit.
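One simple control for the unapproved-client case is comparing endpoint process telemetry against an allowlist of sanctioned VPN software. The sketch below is a hedged illustration; the process names and the approved set are hypothetical, and real detection would also cover reconfigured sanctioned clients, which this check cannot see.

```python
# Hypothetical allowlist of sanctioned clients and a watchlist of known VPN
# process names; both would be maintained by the security team in practice.
APPROVED_VPN_PROCESSES = {"corpvpn"}
KNOWN_VPN_PROCESSES = {"openvpn", "wireguard", "nordvpn", "expressvpn", "corpvpn"}

def unauthorized_vpn(running_processes):
    """Flag known VPN client processes that are not on the approved list."""
    running = {p.lower() for p in running_processes}
    return sorted((running & KNOWN_VPN_PROCESSES) - APPROVED_VPN_PROCESSES)

print(unauthorized_vpn(["chrome", "wireguard", "corpvpn"]))  # ['wireguard']
```

Hits should be correlated with network telemetry (unexpected encrypted egress to non-corporate endpoints) before escalation, since portable or renamed clients evade name matching.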

PR035.001 - AI Agent Data Staging

A subject prepares for potential insider activity by directing an artificial intelligence (AI) agent to aggregate, organize, or transform sensitive organizational data into structured or portable formats.

 

This behavior occurs when an AI agent is tasked with systematically collecting information from internal repositories and consolidating it into outputs that are easier to store, review, transfer, or exploit. The agent performs bulk summarization, data normalization, or cross-repository aggregation that significantly reduces the effort required to later misuse the information.

 

Unlike reconnaissance activities that focus on discovering intelligence, AI Agent Data Staging focuses on operational preparation of data. The AI agent converts dispersed or complex internal information into consolidated outputs that increase its portability, usability, or accessibility outside its original context.

 

Examples include:

 

  • Directing an AI agent to compile documents from multiple internal repositories into a consolidated report or briefing.
  • Aggregating large volumes of internal documentation into structured summaries or datasets.
  • Transforming proprietary knowledge bases or technical documentation into simplified formats suitable for external distribution.
  • Generating derivative outputs that remove contextual safeguards such as system dependencies, formatting controls, or embedded metadata.
  • Organizing large collections of files or records into categorized outputs intended for later retrieval or transfer.

 

The defining characteristic of this Sub-section is the delegated consolidation of sensitive information. The subject leverages the AI agent to perform scalable data preparation that increases the volume, portability, or usability of organizational data.

 

While the staged data may not yet have been transferred outside the organization, the consolidation process materially lowers the effort required to exfiltrate or exploit it. In environments where AI platforms possess broad repository visibility, this capability can significantly accelerate the preparation phase of insider activity.
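Where AI platforms emit per-session audit events, the delegated-consolidation pattern described above can be surfaced by flagging sessions whose cross-repository reach and output volume exceed a baseline. This is a minimal sketch under assumed event fields and thresholds, not a reference to any particular platform's audit schema.

```python
# Hypothetical AI-agent audit events: repositories touched and volume of
# content consolidated per session. Thresholds are illustrative.
sessions = [
    {"user": "emp-515", "repos_touched": 14, "docs_aggregated": 420},
    {"user": "emp-222", "repos_touched": 2, "docs_aggregated": 9},
]

def flag_staging_sessions(events, repo_threshold=5, doc_threshold=100):
    """Sessions whose cross-repository reach and volume suggest data staging."""
    return [
        e["user"] for e in events
        if e["repos_touched"] >= repo_threshold
        and e["docs_aggregated"] >= doc_threshold
    ]

print(flag_staging_sessions(sessions))  # ['emp-515']
```

Requiring both conditions (breadth and volume) helps separate staging from legitimate broad-but-shallow search or deep-but-narrow research.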

IF028.001 - AI Agent Internal Reconnaissance

A subject causes harm by directing an artificial intelligence (AI) agent to generate unauthorized internal intelligence about the organization—information the subject would not reasonably possess through legitimate role-based access or business need.

 

This behavior occurs when an AI agent is used to systematically query, correlate, or infer sensitive internal information by leveraging its broad integrations, indexing authority, or analytical capabilities. The resulting intelligence may reveal restricted projects, executive decisions, legal matters, acquisition activity, investigation status, architectural weaknesses, or other sensitive organizational insight.

 

Unlike routine search or manual browsing, AI agent internal reconnaissance enables:

 

  • Cross-repository correlation of fragmented data.
  • Inference generation from distributed signals.
  • Relationship mapping between people, systems, and initiatives.
  • Aggregation of intelligence from platforms the subject does not routinely access.
  • Persistent monitoring for developments related to sensitive topics.

 

In certain enterprise deployments, authorized AI platforms possess integration-level access across knowledge bases, ticketing systems, document repositories, messaging platforms, and identity directories. A subject may deliberately leverage this aggregated visibility to extract or infer intelligence beyond their legitimate business scope.

 

Examples include:

 

  • Tasking an AI agent to determine whether a confidential acquisition is underway by correlating calendar entries, procurement tickets, and legal document metadata.
  • Directing an agent to summarize all references to an internal investigation across multiple repositories.
  • Instructing an agent to infer likely restructuring plans based on hiring freezes, budget adjustments, and executive communications.
  • Using an AI platform’s enterprise-wide indexing capability to identify sensitive project names or legal matters outside the subject’s department.

 

The infringement is established when the agent-generated intelligence materially exceeds legitimate business need and results in unauthorized exposure of sensitive organizational insight. The harm lies in the unauthorized acquisition of internal intelligence—particularly where that intelligence enables subsequent exploitation, trading, coercion, or strategic misuse.

 

The defining characteristic is not merely access, but computational synthesis. The AI agent transforms distributed internal data into actionable intelligence that the subject could not reasonably derive manually within policy boundaries.
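Because the infringement turns on intelligence exceeding legitimate business need, one practical control is comparing the repositories an agent queried on a user's behalf against that user's role scope. The sketch below assumes a role-to-repository mapping exists; the names and mapping format are hypothetical.

```python
# Hypothetical mapping of users to repositories their role legitimately covers,
# checked against the repositories an AI agent queried on their behalf.
role_scope = {"emp-777": {"eng-wiki", "eng-tickets"}}

def out_of_scope_queries(user, queried_repos, scope):
    """Repositories the agent reached that fall outside the user's role scope."""
    return sorted(set(queried_repos) - scope.get(user, set()))

hits = out_of_scope_queries(
    "emp-777",
    ["eng-wiki", "legal-matters", "ma-dataroom"],
    role_scope,
)
print(hits)  # ['legal-matters', 'ma-dataroom']
```

Out-of-scope hits are not automatically violations — agents may legitimately surface cross-team material — but repeated reach into legal, M&A, or investigation repositories from an unrelated role matches the examples in this section and merits review.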