
Insider Threat Matrix™

  • ID: PV001
  • Created: 25th May 2024
  • Updated: 14th June 2024
  • Contributor: The ITM Team

No Ready System-Level Mitigation

The activity in this section cannot be readily mitigated at a system level with preventive controls, since it relies on the abuse of fundamental features of the system.

Sections

AF015 - File Deletion

A subject deletes a file or files to prevent them from being available for later analysis or to disrupt the availability of a system. This could include log files, files downloaded by the subject, files created by the subject, or system files.

ME006 - Web Access

A subject can access the web with an organization-owned device.

ME011 - Screenshots

A subject can take screenshots on a device.

ME012 - Clipboard

A subject can use the clipboard on a device (copy & paste).

AF001 - Clear Command History

A subject clears command history to prevent executed commands from being reviewed, disclosing information about the subject’s activities.

AF011 - Physical Destruction of Storage Media

A subject may destroy or otherwise impair physical storage media such as hard drives to prevent them from being analyzed.

AF014 - System Shutdown

A subject may shut down a system to clear volatile memory (RAM), preventing memory acquisition and analysis.

PR012 - Physical Disk Removal

A subject removes the physical disk of a target system to access the target file system with an external device/system.

PR023 - Suspicious Web Browsing

A subject engages in web searches that may indicate research or information gathering related to potential infringement or anti-forensic activities. Examples include searching for software that could facilitate data exfiltration, methods for deleting or modifying system logs, or techniques to evade security controls. Such activity could signal preparation for a potential insider event.

IF021 - Harassment and Discrimination

A subject engages in unauthorized conduct that amounts to harassment or discriminatory behavior within the workplace, targeting individuals or groups based on protected characteristics, such as race, gender, religion, or other personal attributes. Incidents of harassment and discrimination may expose the organization to legal risks, potential reputational damage, and regulatory penalties. Additionally, individuals affected by such behavior may be at higher risk of retaliating or disengaging from their work, potentially leading to further insider risks.

PR025 - File Download

A subject downloads one or more files to a system to access them or to prepare them for exfiltration.

ME023 - Sensitivity Label Leakage

Sensitivity label leakage refers to the exposure or misuse of classification metadata—such as Microsoft Purview Information Protection (MIP) sensitivity labels—through which information about the nature, importance, or confidentiality of a file is unintentionally or deliberately disclosed. While the underlying content of the document may remain encrypted or otherwise protected, the presence and visibility of sensitivity labels alone can reveal valuable contextual information to an insider.

 

This form of leakage typically occurs when files labeled with sensitivity metadata are transferred to insecure locations, shared with unauthorized parties, or surfaced in logs, file properties, or collaboration tool interfaces. Labels may also be leaked through misconfigured APIs, email headers, or third-party integrations that inadvertently expose metadata fields. The leakage of sensitivity labels can help a malicious insider identify and prioritize high-value targets or navigate internal systems with greater precision, without needing immediate access to the protected content.

 

Examples of Use:

  • An insider accesses file properties on a shared drive to identify documents labeled Highly Confidential with the intention of exfiltrating them later.
  • Sensitivity labels are exposed in outbound email headers or logs, revealing the internal classification of attached files.
  • Files copied to an unmanaged device retain their label metadata, inadvertently disclosing sensitivity levels if examined later.
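
As an illustration of how label metadata can be read without ever opening the protected content, the following Python sketch lists MIP label properties found in Office files. It assumes labels are recorded as custom document properties whose names begin with MSIP_Label_ inside docProps/custom.xml of the OOXML (ZIP) container, which is how MIP clients have commonly stored them; that property-name pattern and the directory passed on the command line are assumptions to verify in your own environment.

    # Minimal sketch: list sensitivity-label metadata exposed in Office file properties.
    # Assumption: labels are stored as custom document properties whose names begin
    # with "MSIP_Label_" inside docProps/custom.xml of the OOXML (ZIP) container.
    import re
    import sys
    import zipfile
    from pathlib import Path

    LABEL_PROPERTY = re.compile(
        r'name="(MSIP_Label_[^"]+)"[^>]*>\s*<vt:lpwstr>([^<]*)</vt:lpwstr>'
    )

    def exposed_labels(path: Path) -> list[tuple[str, str]]:
        """Return (property, value) pairs for any MIP label metadata in an OOXML file."""
        try:
            with zipfile.ZipFile(path) as zf:
                xml = zf.read("docProps/custom.xml").decode("utf-8", errors="replace")
        except (KeyError, zipfile.BadZipFile):
            return []  # not an OOXML container, or no custom properties part
        return LABEL_PROPERTY.findall(xml)

    if __name__ == "__main__":
        for file in Path(sys.argv[1]).rglob("*.docx"):
            for prop, value in exposed_labels(file):
                print(f"{file}: {prop} = {value}")

Run against an export of a shared drive, a review like this shows exactly what classification information is visible to anyone with read access to file metadata.
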
AF001.002 - Clear Bash History

A subject clears bash terminal command history to prevent executed commands from being reviewed, disclosing information about the subject’s activities.

The Command Prompt on Windows stores command history only within the current session; once the Command Prompt is closed, the history is lost.

On Linux-based operating systems, different terminal software may store command history in various locations, the most common being /home/%username%/.bash_history. Running the command history -c clears the history for the current session, preventing it from being written to .bash_history when the session ends.

On macOS, the Terminal utility writes command history to /Users/%username%/.zsh_history or /Users/%username%/.bash_history, depending on the operating system version (which determines the default shell).
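
Because cleared histories often leave behind an empty or sharply truncated history file, a simple review script can surface candidates for closer inspection. The Python sketch below is a minimal example of that idea; the file names, the /home layout, and the size threshold are illustrative assumptions to adapt to the hosts being monitored.

    # Minimal sketch: flag shell history files that are empty or unusually small,
    # which may indicate history clearing. Paths and threshold are illustrative.
    import datetime
    from pathlib import Path

    CANDIDATES = [".bash_history", ".zsh_history"]
    SUSPICIOUS_MAX_BYTES = 64  # an active account's history is rarely this small

    def review_history(home: Path) -> None:
        for name in CANDIDATES:
            hist = home / name
            if not hist.exists():
                continue
            stat = hist.stat()
            modified = datetime.datetime.fromtimestamp(stat.st_mtime)
            if stat.st_size <= SUSPICIOUS_MAX_BYTES:
                print(f"{hist}: {stat.st_size} bytes, last modified "
                      f"{modified:%Y-%m-%d %H:%M} - possible clearing")

    if __name__ == "__main__":
        for home in Path("/home").iterdir():
            if home.is_dir():
                review_history(home)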

AF001.001 - Clear PowerShell History

A subject clears PowerShell command history to prevent executed commands from being reviewed, disclosing information about the subject’s activities.

PowerShell stores command history per user account via the PSReadLine module, which writes executed commands to ConsoleHost_history.txt, located under C:\Users\%username%\AppData\Roaming\Microsoft\Windows\PowerShell\PSReadLine\.

A subject can delete their own PSReadLine history file without any special permissions.

A subject may attempt to use the Clear-History cmdlet; however, this only clears commands from the current session and does not affect the PSReadLine history file.
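
To make the distinction concrete, the short Python sketch below locates the current user's PSReadLine history file and reports whether it exists and how large it is; an unexpectedly missing or empty file on an actively used account may warrant review. The ConsoleHost_history.txt file name and the APPDATA-relative path reflect default PSReadLine behaviour and should be treated as assumptions to confirm in your environment.

    # Minimal sketch: report on the current user's PSReadLine history file (Windows).
    # Assumption: default PSReadLine location under %APPDATA%.
    import os
    from pathlib import Path
    from typing import Optional

    def psreadline_history() -> Optional[Path]:
        appdata = os.environ.get("APPDATA")
        if not appdata:
            return None  # not a Windows session with a roaming profile
        return Path(appdata, "Microsoft", "Windows", "PowerShell",
                    "PSReadLine", "ConsoleHost_history.txt")

    if __name__ == "__main__":
        history = psreadline_history()
        if history is None or not history.exists():
            print("PSReadLine history file not found - possibly deleted or never created")
        else:
            print(f"{history}: {history.stat().st_size} bytes")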

PR016.001 - Local Data Staging

A subject stages collected data in a central location or directory local to the current system prior to exfiltration.

PR020.001 - Renaming Files or Changing File Extensions

A subject may rename a file to obscure its content or change the file extension to hide the file type. This can aid in avoiding suspicion and bypassing certain security filters and endpoint monitoring tools. For example, renaming a sensitive document from FinancialReport.docx to Recipes.txt before copying it to a USB mass storage device.
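
Renamed files can often be spotted by comparing a file's content signature (magic bytes) with its extension. The Python sketch below illustrates the idea with a deliberately small signature table; it is an assumption-laden example, not a complete detection rule.

    # Minimal sketch: flag files whose content signature does not match their
    # extension, a simple check for renamed or re-extensioned files.
    # The signature table is a small illustrative subset.
    import sys
    from pathlib import Path

    SIGNATURES = {
        b"PK\x03\x04": {".docx", ".xlsx", ".pptx", ".zip"},  # OOXML files are ZIP containers
        b"%PDF": {".pdf"},
        b"\x89PNG": {".png"},
    }

    def mismatched(path: Path) -> bool:
        with path.open("rb") as fh:
            header = fh.read(8)
        for magic, extensions in SIGNATURES.items():
            if header.startswith(magic):
                return path.suffix.lower() not in extensions
        return False  # unknown signature: no opinion

    if __name__ == "__main__":
        for file in Path(sys.argv[1]).rglob("*"):
            if file.is_file() and mismatched(file):
                print(f"{file}: content signature does not match extension")

A .docx renamed to Recipes.txt, as in the example above, would still begin with the ZIP signature and be flagged by a check like this.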

AF004.001 - Clear Chrome Artifacts

A subject clears Google Chrome browser artifacts to hide evidence of their activities, such as visited websites, cache, cookies, and download history.

AF004.003 - Clear Firefox Artifacts

A subject clears Mozilla Firefox browser artifacts to hide evidence of their activities, such as visited websites, cache, cookies, and download history.

AF004.002 - Clear Edge Artifacts

A subject clears Microsoft Edge browser artifacts to hide evidence of their activities, such as visited websites, cache, cookies, and download history.

PR018.007 - Downgrading Microsoft Information Protection (MIP) Labels

A subject may intentionally downgrade the Microsoft Information Protection (MIP) label applied to a file in order to obscure the sensitivity of its contents and bypass security controls. MIP labels are designed to classify and protect files based on their sensitivity—ranging from “Public” to “Highly Confidential”—and are often used to enforce Data Loss Prevention (DLP), access restrictions, encryption, and monitoring policies.

 

By reducing a file's label classification, the subject may make the file appear innocuous, thus reducing the likelihood of triggering alerts or blocks by email filters, endpoint monitoring tools, or other security mechanisms.

 

This technique can enable the unauthorized exfiltration or misuse of sensitive data while evading established security measures. It may indicate premeditated policy evasion and can significantly weaken the organization’s data protection posture.

 

Examples of Use:

  • A subject downgrades a financial strategy document from Highly Confidential to Public before emailing it to a personal address, bypassing DLP policies that would normally prevent such transmission.
  • A user removes a classification label entirely from an engineering design document to upload it to a non-corporate cloud storage provider without triggering security controls.
  • An insider reclassifies multiple project files from Confidential to Internal Use Only to facilitate mass copying to a removable USB device.

 

Detection Considerations:

  • Monitoring for sudden or unexplained MIP label downgrades, especially in proximity to data transfer events (e.g., email sends, cloud uploads, USB copies).
  • Correlating audit logs from Microsoft Purview (formerly Microsoft Information Protection) with outbound data transfer events.
  • Use of Data Classification Analytics to detect label changes on high-value files without associated business justification.
  • Reviewing file access and modification logs to identify users who have altered classification metadata prior to suspicious activity.
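
Building on the considerations above, the following Python sketch correlates label-downgrade events with subsequent outbound transfers of the same file by the same user within a short window. The event dictionaries use a hypothetical, simplified schema; the field names and the 24-hour window are assumptions, and a real deployment would map Purview audit records and DLP, email, or USB telemetry onto them.

    # Minimal sketch: pair label downgrades with outbound transfers of the same file
    # by the same user within a time window. Event schema and window are hypothetical.
    from datetime import datetime, timedelta

    WINDOW = timedelta(hours=24)

    downgrades = [  # hypothetical, normalised label-change events
        {"user": "jsmith", "file": "Q3-strategy.docx", "old": "Highly Confidential",
         "new": "Public", "time": datetime(2024, 6, 3, 9, 12)},
    ]
    transfers = [  # hypothetical outbound events (email send, cloud upload, USB copy)
        {"user": "jsmith", "file": "Q3-strategy.docx", "channel": "personal email",
         "time": datetime(2024, 6, 3, 10, 45)},
    ]

    def correlate(downgrades, transfers, window=WINDOW):
        """Yield (downgrade, transfer) pairs where the same user moved the same file
        outbound shortly after lowering its label."""
        for d in downgrades:
            for t in transfers:
                if (t["user"] == d["user"] and t["file"] == d["file"]
                        and timedelta(0) <= t["time"] - d["time"] <= window):
                    yield d, t

    if __name__ == "__main__":
        for d, t in correlate(downgrades, transfers):
            print(f"{d['user']} downgraded {d['file']} from {d['old']} to {d['new']} "
                  f"and sent it via {t['channel']} {t['time'] - d['time']} later")
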
IF022.003 - PHI Leakage (Protected Health Information)

PHI Leakage refers to the unauthorized, accidental, or malicious exposure, disclosure, or loss of Protected Health Information (PHI) by a healthcare provider, health plan, healthcare clearinghouse (collectively, "covered entities"), or their business associates. Under the Health Insurance Portability and Accountability Act (HIPAA) in the United States, PHI is defined as any information that pertains to an individual’s physical or mental health, healthcare services, or payment for those services that can be used to identify the individual. This includes medical records, treatment history, diagnosis, test results, and payment details.

 

HIPAA imposes strict regulations on how PHI must be handled, stored, and transmitted to ensure that individuals' health information remains confidential and secure. The Privacy Rule within HIPAA outlines standards for the protection of PHI, while the Security Rule mandates safeguards for electronic PHI (ePHI), including access controls, encryption, and audit controls. Any unauthorized access, improper sharing, or accidental exposure of PHI constitutes a breach under HIPAA, which can result in significant civil and criminal penalties, depending on the severity and nature of the violation.

 

In addition to HIPAA, other countries have established similar protections for PHI. For example, the General Data Protection Regulation (GDPR) in the European Union protects personal health data as part of its broader data protection laws. Similarly, Canada's Personal Information Protection and Electronic Documents Act (PIPEDA) governs the collection, use, and disclosure of personal health information by private-sector organizations. Australia also has regulations under the Privacy Act 1988 and the Health Records Act 2001, which enforce stringent rules for the handling of health-related personal data.

 

This infringement occurs when an insider—whether maliciously or through negligence—exposes PHI in violation of privacy laws, organizational policies, or security protocols. Such breaches can involve unauthorized access to health records, improper sharing of medical information, or accidental exposure of sensitive health data. These breaches may result in severe legal, financial, and reputational consequences for the healthcare organization, including penalties, lawsuits, and loss of trust.

 

Examples of Infringement:

  • A healthcare worker intentionally accesses a patient's medical records without authorization for personal reasons, such as to obtain information on a celebrity or acquaintance.
  • An employee negligently sends patient health data to the wrong recipient via email, exposing sensitive health information.
  • An insider bypasses security controls to access and exfiltrate medical records for malicious use, such as identity theft or selling PHI on the dark web.