
Insider Threat Matrix™
  • ID: DT146
  • Created: 2nd October 2025
  • Updated: 2nd October 2025
  • Platforms: Windows, Linux, MacOS, Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Oracle Cloud Infrastructure (OCI)
  • Contributor: The ITM Team

File Integrity Monitoring

File Integrity Monitoring (FIM) is a technical prevention mechanism designed to detect unauthorized modification, deletion, or creation of files and configurations on monitored systems. The most basic implementation is cryptographic hash comparison: a known-good baseline hash (typically SHA-256, though legacy deployments may still use SHA-1) is calculated and stored for each monitored file. At regular intervals (or in real time), current file states are re-hashed and compared to the baseline. Any discrepancy in hash value, size, permissions, or timestamp is flagged as an integrity violation.
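
As an illustration of the baseline-and-compare approach, the following minimal Python sketch hashes a set of monitored files with SHA-256 and reports deletions or modifications on a subsequent scan. The watchlist paths and baseline file name are illustrative, not part of this entry.

    # Minimal FIM sketch: SHA-256 baseline and interval-based comparison.
    # The watchlist and baseline file name below are illustrative.
    import hashlib
    import json
    import os

    def sha256_of(path):
        """Hash the file in chunks so large files do not exhaust memory."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def build_baseline(paths):
        """Record a known-good hash for every monitored file."""
        return {p: sha256_of(p) for p in paths if os.path.isfile(p)}

    def compare(baseline):
        """Return integrity violations relative to the stored baseline."""
        findings = []
        for path, known in baseline.items():
            if not os.path.isfile(path):
                findings.append("DELETED: " + path)
            elif sha256_of(path) != known:
                findings.append("MODIFIED: " + path)
        return findings

    if __name__ == "__main__":
        monitored = ["/etc/passwd", "/etc/ssh/sshd_config"]   # example watchlist
        baseline = build_baseline(monitored)
        with open("baseline.json", "w") as f:                 # persist for later scans
            json.dump(baseline, f)
        print(compare(baseline))                               # [] until something changes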

While hash comparison is foundational, mature FIM solutions incorporate additional telemetry and instrumentation to increase forensic depth, reduce false positives, and support attribution (a brief sketch of the permission and timestamp checks appears after the list):

 

  • ACL and Permission Monitoring: Captures unauthorized changes to file ownership, execution flags (e.g. chmod +x), NTFS permissions, or group inheritance, which are critical for detecting silent privilege escalation.
  • Timestamp Integrity Checks: Monitors for retroactive or unnatural changes to creation, modification, and access timestamps, commonly associated with anti-forensic behaviors such as timestomping.
  • Event-based Hooks: Leverages OS-native event subsystems (e.g. Windows ETW, USN Journal; Linux inotify, auditd, fanotify) to trigger high-fidelity alerts on file system activity without waiting for interval-based scans.
  • Process Attribution: Enriches FIM events with the user identity, process name, PID, and command line responsible for the change, enabling precise correlation with session logs, drift indicators, and subject behavior.
  • Snapshot or Versioned Comparisons: Enables file state diffing across time, including rollback of modified artifacts or analysis of change sequences (common in forensic suites and some EDR platforms).
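
As referenced above, a hash-only baseline can be extended with permission, ownership, and timestamp metadata so that changes such as chmod +x or timestomping are caught even when file content is untouched. The Python sketch below shows one possible per-file record (field names are illustrative); event-driven triggering via inotify, auditd, fanotify, ETW, or the USN Journal would replace interval-based re-scanning in a production deployment.

    # Per-file record combining a content hash with permission, ownership, and
    # timestamp metadata, per the list above. Field names are illustrative.
    import hashlib
    import os
    import stat

    def file_record(path):
        st = os.stat(path, follow_symlinks=False)
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return {
            "sha256": digest,
            "mode": stat.filemode(st.st_mode),  # e.g. '-rwxr-xr-x'; catches chmod +x
            "uid": st.st_uid,
            "gid": st.st_gid,
            "size": st.st_size,
            "mtime": st.st_mtime,               # retroactive changes suggest timestomping
            "ctime": st.st_ctime,               # metadata-change time on POSIX systems
        }

    def diff_records(old, new):
        """Return only the fields that changed, for inclusion in the alert."""
        return {k: (old[k], new[k]) for k in old if old[k] != new[k]}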

 

To be effective in insider threat contexts, File Integrity Monitoring should be explicitly tuned to monitor, at a minimum, the following locations (an illustrative watchlist appears after the list):

 

  • Executable and script directories (%ProgramFiles%, %APPDATA%, /usr/local/bin/, /opt/)
  • Configuration and runtime paths (/etc/, C:\Windows\System32\Config, container volumes)
  • Security logs, audit trails, and telemetry agents (.evtx, /var/log/, SIEM client logs)
  • Credential storage and secrets locations (browser credential stores, password vaults, keyrings, .env files)
  • Backup and recovery tooling (scripts, snapshot schedulers, and volume metadata)
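
An illustrative watchlist grouped by the categories above might look like the following; the specific paths are examples only and should be tuned to the environment. Directories would be expanded recursively before feeding a routine such as the build_baseline() function in the earlier sketch.

    # Illustrative watchlist grouped by the categories above; tune paths per environment.
    WATCHLIST = {
        "executables_and_scripts": [r"C:\Program Files", r"%APPDATA%",
                                    "/usr/local/bin", "/opt"],
        "configuration":           ["/etc", r"C:\Windows\System32\Config"],
        "security_telemetry":      [r"C:\Windows\System32\winevt\Logs", "/var/log"],
        "credentials_and_secrets": ["~/.gnupg", "~/.config/google-chrome", ".env"],
        "backup_and_recovery":     ["/etc/cron.d", "/usr/local/sbin"],
    }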

 

In ransomware or destruction scenarios, File Integrity Monitoring can detect the early stages of detonation by identifying rapid, high-volume file modifications and hash changes, particularly in mapped drives, document repositories, and shared storage. This can serve as a trigger for containment actions and/or investigation before full encryption completes, especially when correlated with process telemetry and known ransomware behaviors (e.g. deletion of shadow copies, entropy spikes).
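
Two of the heuristics mentioned here, a burst of file modifications in a short window and an entropy spike consistent with encryption, can be sketched as follows; the thresholds are illustrative and require tuning against normal workload behavior.

    # Two illustrative ransomware heuristics: a burst of modification events and an
    # entropy spike suggesting encrypted content. Thresholds require local tuning.
    import math
    import time
    from collections import deque

    def shannon_entropy(data):
        """Bits per byte; typical documents score well below ~7.5, ciphertext near 8."""
        if not data:
            return 0.0
        counts = [0] * 256
        for b in data:
            counts[b] += 1
        total = len(data)
        return -sum(c / total * math.log2(c / total) for c in counts if c)

    class BurstDetector:
        """Flags when more than `limit` modification events land inside `window` seconds."""
        def __init__(self, limit=100, window=10.0):
            self.limit, self.window = limit, window
            self.events = deque()

        def record(self, ts=None):
            ts = time.monotonic() if ts is None else ts
            self.events.append(ts)
            while self.events and ts - self.events[0] > self.window:
                self.events.popleft()
            return len(self.events) > self.limit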

 

When tuned and deployed appropriately, File Integrity Monitoring provides a high-fidelity signal of tampering, staging, or covert access attempts, even when other telemetry (e.g. signature-based detection or anomaly modeling) fails to trigger. This makes it particularly valuable in environments where subjects have elevated access, control over telemetry agents, or knowledge of investigative blind spots.

Sections

IF022 Data Loss

Data loss refers to the unauthorized, unintentional, or malicious disclosure, exposure, alteration, or destruction of sensitive organizational data caused by the actions of an insider. It encompasses incidents in which critical information—such as intellectual property, regulated personal data, or operationally sensitive content—is compromised due to insider behavior. This behavior may arise from deliberate exfiltration, negligent data handling, policy circumvention, or misuse of access privileges. Data loss can occur through manual actions (e.g., unauthorized file transfers or improper document handling) or through technical vectors (e.g., insecure APIs, misconfigured cloud services, or shadow IT systems).

IF013 Disruption of Business Operations

The subject causes interruptions, degradation, or instability in organizational systems, processes, or data flows that impair day‑to‑day operations and affect availability, integrity, or service continuity. This category encompasses non‑exfiltrative and non‑theft forms of disruption, distinct from data exfiltration or malware aimed at permanent destruction.

 

Disruptive actions may include misuse of administrative tools, intentional misconfiguration, suppression of services, logic interference, dependency tampering, or selective disabling of critical functions. The objective is operational impact (slowing, blocking, or misrouting workflows) rather than data removal or theft.

IF027.005 Destructive Malware Deployment

The subject deploys destructive malware: software designed to irreversibly damage systems, erase data, or disrupt operational availability. Unlike ransomware, which encrypts files to extort payment, destructive malware is deployed with the explicit intent to delete, corrupt, or disable systems and assets without recovery. Its objective is disruption or sabotage, not necessarily direct financial gain.

 

This behavior may include (a command-line pattern-matching sketch follows the list):

 

  • Wiper malware (e.g. HermeticWiper, WhisperGate, ZeroCleare)
  • Logic bombs or time-triggered deletion scripts
  • Bootloader overwrite tools or UEFI tampering utilities
  • Mass delete or format scripts (format, cipher /w, del /s /q, rm -rf)
  • Data corruption utilities (e.g. file rewriters, header corruptors)
  • Credential/system-wide lockout scripts (e.g. disabling accounts, resetting passwords en masse)
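
As noted above, the mass-delete and format commands listed here can be matched against process command-line telemetry. The following sketch shows one way to do that; the regular expressions and the example command are illustrative.

    # Matching process command lines against the destructive patterns listed above.
    # The regular expressions and the example command are illustrative.
    import re

    DESTRUCTIVE_PATTERNS = [
        re.compile(r"\brm\s+-[a-z]*(rf|fr)\b"),                # rm -rf / rm -fr variants
        re.compile(r"\bdel\s+/s\s+/q\b", re.IGNORECASE),
        re.compile(r"\bcipher\s+/w", re.IGNORECASE),
        re.compile(r"\bformat\s+[a-z]:", re.IGNORECASE),
    ]

    def is_destructive(cmdline):
        return any(p.search(cmdline) for p in DESTRUCTIVE_PATTERNS)

    # Example: is_destructive(r"cmd.exe /c del /s /q D:\shares\finance") -> True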

 

Insiders may deploy destructive malware as an act of retaliation (e.g. prior to departure), sabotage (e.g. to disrupt an investigation or competitor), or under coercion. Detonation may be manual or scheduled, and in some cases the malware is disguised as routine tooling to delay detection.

 

Destructive deployment is high-severity and often coincides with forensic tampering or precursor access-based infringements (e.g. file enumeration or backup deletion).

IF027.002 Ransomware Deployment

The subject deploys ransomware within the organization’s environment, resulting in the encryption, locking, or destructive alteration of organizational data, systems, or backups. Ransomware used by insiders may be obtained from public repositories, affiliate programs (e.g. RaaS platforms), or compiled independently using commodity builder kits. Unlike external actors who rely on phishing or remote exploitation, insiders often bypass perimeter controls by detonating ransomware from within trusted systems using local access.

 

Ransomware payloads are typically compiled as executables, occasionally obfuscated using packers or crypters to evade detection. Execution may be initiated via command-line, scheduled task, script wrapper, or automated loader. Encryption routines often target common file extensions recursively across accessible volumes, mapped drives, and cloud sync folders. In advanced deployments, the subject may disable volume shadow copies (vssadmin delete shadows) or stop backup agents (net stop) prior to detonation to increase impact.
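
The precursor commands named here (shadow copy deletion and stopping backup agents) can be flagged from the same command-line telemetry, complementing the burst and entropy heuristics sketched earlier; the patterns below are illustrative.

    # Flagging the precursor commands named above from process command-line telemetry.
    import re

    PRECURSOR_PATTERNS = [
        re.compile(r"vssadmin(\.exe)?\s+delete\s+shadows", re.IGNORECASE),
        re.compile(r"\bnet\s+stop\s+\S+", re.IGNORECASE),   # stopping backup agents pre-detonation
    ]

    def ransomware_precursor(cmdline):
        return any(p.search(cmdline) for p in PRECURSOR_PATTERNS)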

 

In some insider scenarios, ransomware is executed selectively: targeting specific departments, shares, or systems, rather than broad detonation. This behavior may indicate intent to send a message, sabotage selectively, or avoid attribution. Payment demands may be issued internally, externally, or omitted entirely if disruption is the primary motive.

IF027.001 Infostealer Deployment

The subject deploys credential-harvesting malware (commonly referred to as an infostealer) to extract sensitive authentication material or session artifacts from systems under their control. These payloads are typically configured to capture data from browser credential stores (e.g., Login Data SQLite databases in Chromium-based browsers), password vaults (e.g., KeePass, 1Password), clipboard buffers, Windows Credential Manager, or the Local Security Authority Subsystem Service (LSASS) memory space.

 

Infostealers may be executed directly via compiled binaries, staged through malicious document macros, or loaded reflectively into memory using PowerShell, .NET assemblies, or process hollowing techniques. Some variants are fileless and reside entirely in memory, while others create persistence via registry keys (e.g., HKCU\Software\Microsoft\Windows\CurrentVersion\Run) or scheduled tasks.
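
Registry-based persistence of this kind can be baselined in the same spirit as file integrity monitoring. The following Windows-only sketch uses the standard-library winreg module to enumerate HKCU Run-key values and surface entries absent from a prior baseline; the function names are illustrative.

    # Baselining HKCU Run-key values (Windows only) so new persistence entries stand out.
    import winreg

    RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

    def read_run_entries():
        entries = {}
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
            i = 0
            while True:
                try:
                    name, value, _type = winreg.EnumValue(key, i)
                except OSError:        # no more values
                    break
                entries[name] = str(value)
                i += 1
        return entries

    def new_persistence(baseline):
        """Entries present now but absent from the baseline."""
        current = read_run_entries()
        return {n: v for n, v in current.items() if n not in baseline}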

 

While often associated with external threat actors, insider deployment of infostealers allows subjects to bypass authentication safeguards, impersonate peers, or exfiltrate internal tokens for later use or sale. In cases where data is not immediately exfiltrated, local staging (e.g., in %AppData%, %Temp%, or encrypted containers) may indicate an intent to transfer data offline or deliver it via alternate channels.

IF022.001 Intellectual Property Theft

A subject misappropriates, discloses, or exploits proprietary information, trade secrets, creative works, or internally developed knowledge obtained through their role within the organization. This form of data loss typically involves the unauthorized transfer or use of intellectual assets—such as source code, engineering designs, research data, algorithms, product roadmaps, marketing strategies, or proprietary business processes—without the organization's consent.

 

Intellectual property theft can occur during employment or around the time of offboarding, and may involve methods such as unauthorized file transfers, use of personal storage devices, cloud synchronization, or improper sharing with third parties. The consequences can include competitive disadvantage, breach of contractual obligations, and significant legal and reputational harm.

IF022.005 Media Leak

The intentional or negligent disclosure of internal data, documents, or communications to members of the press or external media outlets—resulting in the loss of confidentiality, reputational harm, or operational compromise.


Media leaks represent a unique form of data loss. Unlike data exfiltration for financial gain or competitive advantage, this form of loss often involves symbolic targeting, reputational damage, or pressure tactics. Subjects may seek to embarrass the organization, expose internal misconduct, or spark public or political consequences. Leaks may be anonymous, pseudonymous, or openly attributed.

This behavior is sometimes rationalized by the subject as whistleblowing, though it often occurs outside authorized internal reporting channels and in violation of confidentiality agreements, regulatory constraints, or national security laws.


Media leaks blur the line between insider threat and whistleblowing. While some disclosures may raise legitimate ethical concerns, organizations must distinguish between protected disclosures under law (e.g., protected whistle-blower status) and unauthorized leaks that expose sensitive, regulated, or classified information.

These events often generate external investigative pressure (from regulators, media, or lawmakers) and may undermine internal trust—requiring not just forensic containment, but narrative and reputational management.

IF022.004 Payment Card Data Leakage

A subject with access to payment environments or transactional data may deliberately or inadvertently leak sensitive payment card information. Payment Card Data Leakage refers to the unauthorized exposure, transmission, or exfiltration of data governed by the Payment Card Industry Data Security Standard (PCI DSS). This includes both Cardholder Data (CHD)—such as the Primary Account Number (PAN), cardholder name, expiration date, and service code—and Sensitive Authentication Data (SAD), which encompasses full track data, card verification values (e.g., CVV2, CVC2, CID), and PIN-related information.

 

Subjects with privileged, technical, or unsupervised access to point-of-sale systems, payment gateways, backend databases, or log repositories may mishandle or deliberately exfiltrate CHD or SAD. In some scenarios, insiders may exploit access to system-level data stores, intercept transactional payloads, or scrape logs that improperly store SAD in violation of PCI DSS mandates. This may include exporting payment data in plaintext, capturing full card data from logs, or replicating data to unmonitored environments for later retrieval.
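
One common control for this scenario is scanning logs and exports for candidate PANs, i.e. 13-19 digit sequences that pass a Luhn check. The following sketch illustrates the idea; production scanners also normalize separators, suppress known false positives (e.g. order or ticket numbers), and avoid writing the recovered PAN back out in full.

    # Scan a log file for candidate PANs: 13-19 digit runs that pass a Luhn check.
    # The regex and file handling are illustrative.
    import re

    PAN_CANDIDATE = re.compile(r"(?<!\d)\d{13,19}(?!\d)")

    def luhn_valid(number):
        total, parity = 0, len(number) % 2
        for i, ch in enumerate(number):
            d = int(ch)
            if i % 2 == parity:      # double every second digit from the right
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    def scan_log(path):
        """Return (line number, masked PAN) for every Luhn-valid candidate found."""
        hits = []
        with open(path, errors="ignore") as f:
            for lineno, line in enumerate(f, 1):
                for match in PAN_CANDIDATE.finditer(line):
                    pan = match.group()
                    if luhn_valid(pan):
                        hits.append((lineno, pan[:6] + "..." + pan[-4:]))
        return hits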

 

Weak controls, such as the absence of data encryption, improper tokenization of PANs, misconfigured retention policies, or lack of field-level access restrictions, can facilitate misuse by insiders. In some cases, access may be shared or escalated informally, bypassing formal entitlement reviews or just-in-time provisioning protocols. These gaps in security can be manipulated by a subject seeking to leak or profit from payment card data.

 

Insiders may also use legitimate business tools—such as reporting platforms or data exports—to intentionally bypass obfuscation mechanisms or deliver raw payment data to unauthorized recipients. Additionally, compromised service accounts or insider-created backdoors can provide long-term persistence for continued exfiltration of sensitive data.

 

Data loss involving CHD or SAD often triggers mandatory breach disclosures, regulatory scrutiny, and severe financial penalties. It also poses reputational risks, particularly when the loss undermines consumer trust or payment processing agreements. In high-volume environments, even small-scale leaks can result in widespread exposure of customer data and fraud.

IF022.003 PHI Leakage (Protected Health Information)

PHI Leakage refers to the unauthorized, accidental, or malicious exposure, disclosure, or loss of Protected Health Information (PHI) by a healthcare provider, health plan, healthcare clearinghouse (collectively, "covered entities"), or their business associates. Under the Health Insurance Portability and Accountability Act (HIPAA) in the United States, PHI is defined as any information that pertains to an individual’s physical or mental health, healthcare services, or payment for those services that can be used to identify the individual. This includes medical records, treatment history, diagnosis, test results, and payment details.

 

HIPAA imposes strict regulations on how PHI must be handled, stored, and transmitted to ensure that individuals' health information remains confidential and secure. The Privacy Rule within HIPAA outlines standards for the protection of PHI, while the Security Rule mandates safeguards for electronic PHI (ePHI), including access controls, encryption, and audit controls. Any unauthorized access, improper sharing, or accidental exposure of PHI constitutes a breach under HIPAA, which can result in significant civil and criminal penalties, depending on the severity and nature of the violation.

 

In addition to HIPAA, other countries have established similar protections for PHI. For example, the General Data Protection Regulation (GDPR) in the European Union protects personal health data as part of its broader data protection laws. Similarly, Canada's Personal Information Protection and Electronic Documents Act (PIPEDA) governs the collection, use, and disclosure of personal health information by private-sector organizations. Australia also has regulations under the Privacy Act 1988 and the Health Records Act 2001, which enforce stringent rules for the handling of health-related personal data.

 

This infringement occurs when an insider—whether maliciously or through negligence—exposes PHI in violation of privacy laws, organizational policies, or security protocols. Such breaches can involve unauthorized access to health records, improper sharing of medical information, or accidental exposure of sensitive health data. These breaches may result in severe legal, financial, and reputational consequences for the healthcare organization, including penalties, lawsuits, and loss of trust.

 

Examples of Infringement:

  • A healthcare worker intentionally accesses a patient's medical records without authorization for personal reasons, such as to obtain information on a celebrity or acquaintance.
  • An employee negligently sends patient health data to the wrong recipient via email, exposing sensitive health information.
  • An insider bypasses security controls to access and exfiltrate medical records for malicious use, such as identity theft or selling PHI on the dark web.

IF022.002 PII Leakage (Personally Identifiable Information)

PII (Personally Identifiable Information) leakage refers to the unauthorized disclosure, exposure, or mishandling of information that can be used to identify an individual, such as names, addresses, phone numbers, national identification numbers, financial data, or biometric records. In the context of insider threat, PII leakage may occur through negligence, misconfiguration, policy violations, or malicious intent.

 

Insiders may leak PII by sending unencrypted spreadsheets via email, exporting user records from customer databases, misusing access to HR systems, or storing sensitive personal data in unsecured locations (e.g., shared drives or cloud storage without proper access controls). In some cases, PII may be leaked unintentionally through logs, collaboration platforms, or default settings that fail to mask sensitive fields.

 

The consequences of PII leakage can be severe—impacting individuals through identity theft or financial fraud, and exposing organizations to legal penalties, reputational harm, and regulatory sanctions under frameworks such as GDPR, CCPA, or HIPAA.

 

Examples of Infringement:

  • An employee downloads and shares a list of customer contact details without authorization.
  • PII is inadvertently exposed in error logs or email footers shared externally.
  • HR data containing employee National Insurance or Social Security numbers is copied to a personal cloud storage account.

IF013.001 File or Data Deletion

A subject deletes organizational files or data (manually or through tooling) outside authorized workflows, resulting in the loss, concealment, or unavailability of operational assets. This infringement encompasses both targeted deletion (e.g. selected records, logs, or documents) and bulk removal (e.g. recursive deletion of directories or volumes).

 

Unlike Destructive Malware Deployment, which uses self-propagating or malicious code to irreversibly damage systems, this behavior reflects direct user-driven actions or scripts that remove or purge data without employing destructive payloads. Deletions may be conducted via built-in utilities, custom scripts, scheduled tasks, or misuse of administrative tools such as backup managers or version control systems.
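
Bulk removal of this kind can be surfaced by diffing the current file inventory against a prior snapshot. The sketch below flags the case where more than a configurable fraction of previously observed files has disappeared; the 10% default is illustrative.

    # Compare the current directory tree against a prior inventory to surface bulk deletions.
    import os

    def inventory(root):
        """Snapshot of every file path under root."""
        return {
            os.path.join(dirpath, name)
            for dirpath, _dirs, files in os.walk(root)
            for name in files
        }

    def deletion_alert(baseline, root, ratio=0.10):
        """True when more than `ratio` of previously seen files have disappeared."""
        missing = baseline - inventory(root)
        return bool(baseline) and len(missing) / len(baseline) > ratio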

 

This activity frequently occurs to:

 

  • Conceal evidence of other infringing actions (e.g. log deletion to frustrate investigation)
  • Sabotage availability of critical information (e.g. deleting shared drives or project directories)
  • Facilitate exfiltration or preparation (e.g. purging redundant files before copying sensitive data)

 

It may also involve secondary actions such as emptying recycle bins, purging shadow copies, disabling version histories, or wiping removable media to obscure the scope of deletion.

IF013.002 Operational Disruption Impacting Customers

The subject deliberately interferes with operational systems in ways that degrade, interrupt, or misroute services relied upon by customers, without relying on file deletion or malware. This includes misconfigurations, service disabling, authentication interference, or intentional introduction of latency, instability, or incorrect outputs. The result is operational degradation that directly or indirectly affects service delivery, availability, or trust.

 

Unlike File or Data Deletion, this infringement does not depend on erasing data, and unlike Destructive Malware Deployment, it does not rely on malicious payloads or automated damage. The disruption instead stems from direct manipulation of infrastructure, configurations, service states, or user access.

 

Examples include (a configuration-drift sketch follows the list):

 

  • Intentionally disabling authentication or API endpoints
  • Modifying DNS, firewall, or routing rules to block legitimate traffic
  • Tampering with load balancers or HA/failover logic
  • Altering service configurations to break dependency chains (e.g. pointing production systems to empty dev databases)
  • Injecting false flags into monitoring or orchestration tools to trigger auto-scaling failures or mis-alerts
  • Enabling excessive logging or computation to induce service latency or memory exhaustion
  • Locking critical service accounts, API keys, or secrets in vault systems
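
For configuration tampering in particular, diffing the live file against a baselined copy shows what changed rather than only that a hash no longer matches. A minimal sketch, assuming baseline copies are kept in a protected location (the paths shown are illustrative):

    # Surface unauthorized configuration changes by diffing live files against baselines.
    import difflib

    def config_drift(baseline_path, live_path):
        with open(baseline_path) as a, open(live_path) as b:
            diff = difflib.unified_diff(
                a.readlines(), b.readlines(),
                fromfile=baseline_path, tofile=live_path,
            )
        return "".join(diff)

    # Non-empty output indicates drift, e.g.:
    # print(config_drift("/var/fim/baselines/nginx.conf", "/etc/nginx/nginx.conf"))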

 

These actions may be motivated by retaliation, concealment, sabotage, or insider coercion, and often occur in environments where the subject has legitimate system access but uses it to destabilize service delivery covertly.