
Insider Threat Matrix™
  • ID: IF027.001
  • Created: 1st October 2025
  • Updated: 2nd October 2025
  • Platforms: Windows, Linux, macOS, Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Oracle Cloud Infrastructure (OCI)
  • Contributor: The ITM Team

Infostealer Deployment

The subject deploys credential-harvesting malware (commonly referred to as an infostealer) to extract sensitive authentication material or session artifacts from systems under their control. These payloads are typically configured to capture data from browser credential stores (e.g., Login Data SQLite databases in Chromium-based browsers), password vaults (e.g., KeePass, 1Password), clipboard buffers, Windows Credential Manager, or the Local Security Authority Subsystem Service (LSASS) memory space.

 

Infostealers may be executed directly via compiled binaries, staged through malicious document macros, or loaded reflectively into memory using PowerShell, .NET assemblies, or process hollowing techniques. Some variants are fileless and reside entirely in memory, while others create persistence via registry keys (e.g., HKCU\Software\Microsoft\Windows\CurrentVersion\Run) or scheduled tasks.
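
From a defensive standpoint, the Run-key persistence mentioned above can be reviewed directly. The following is a minimal sketch, assuming a Windows host and Python's standard winreg module, that lists HKCU Run entries so they can be compared against an allowlist; the allowlist contents are hypothetical.

```python
# Minimal sketch: enumerate HKCU Run entries so unexpected autoruns
# (a common infostealer persistence location) can be reviewed.
# Windows-only; standard library only. The allowlist below is a
# hypothetical example and should reflect your own approved entries.
import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"
APPROVED = {"OneDrive", "SecurityHealth"}  # hypothetical allowlist

def list_run_entries():
    entries = []
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
        count = winreg.QueryInfoKey(key)[1]  # number of values under the key
        for i in range(count):
            name, value, _ = winreg.EnumValue(key, i)
            entries.append((name, value))
    return entries

if __name__ == "__main__":
    for name, command in list_run_entries():
        flag = "" if name in APPROVED else "  <-- not on allowlist"
        print(f"{name}: {command}{flag}")
```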

 

While often associated with external threat actors, insider deployment of infostealers allows subjects to bypass authentication safeguards, impersonate peers, or exfiltrate internal tokens for later use or sale. In cases where data is not immediately exfiltrated, local staging (e.g., in %AppData%, %Temp%, or encrypted containers) may indicate an intent to transfer data offline or deliver it via alternate channels.

Prevention

ID Name Description
PV015 Application Whitelisting

Allowing only pre-approved software to be installed and run on corporate devices prevents the subject from installing software themselves.

PV056 Azure Conditional Access Policies

Azure Conditional Access provides organizations with a powerful tool to enforce security policies based on various factors, including user behavior, device compliance, and location. These policies can be configured through the Azure Active Directory (Azure AD) portal and are typically applied to cloud-based applications, SaaS platforms, and on-premises resources that are integrated with Azure AD.

 

To configure Conditional Access policies, administrators first define the conditions that trigger the policy, such as:

  • User or group membership: Applying policies to specific users or groups within the organization.
  • Sign-in risk: Assessing user sign-in risk levels, such as unfamiliar locations or suspicious behaviors, and enforcing additional controls like MFA.
  • Device compliance: Ensuring only compliant devices (those managed through Intune or similar tools) can access organizational resources.
  • Location: Restricting access based on trusted or untrusted IP addresses and geographic locations, blocking risky or suspicious login attempts.

 

Once conditions are set, administrators can then specify the actions to take, such as requiring MFA, blocking access, or allowing access only from compliant devices. For example, an organization could require MFA when accessing Microsoft 365 or other cloud applications from an unmanaged device or high-risk location.
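
What such a policy might look like when created programmatically is sketched below, assuming the Microsoft Graph conditional access endpoint, the requests package, and an already-acquired token with the Policy.ReadWrite.ConditionalAccess permission; the group and location identifiers are placeholders, and the policy is created in report-only mode.

```python
# Minimal sketch: create a Conditional Access policy that requires MFA
# for a pilot group when signing in from outside trusted locations.
# Group/location IDs and the token acquisition step are placeholders.
import requests

GRAPH_URL = "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies"
ACCESS_TOKEN = "<token acquired via MSAL or similar>"

policy = {
    "displayName": "Require MFA outside trusted locations (pilot)",
    "state": "enabledForReportingButNotEnforced",  # report-only while testing
    "conditions": {
        "users": {"includeGroups": ["<pilot-group-object-id>"]},
        "applications": {"includeApplications": ["All"]},
        "locations": {
            "includeLocations": ["All"],
            "excludeLocations": ["AllTrusted"],
        },
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

resp = requests.post(
    GRAPH_URL,
    json=policy,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()
print("Created policy:", resp.json().get("id"))
```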

 

Conditional Access policies are configured through the Azure AD portal and can be applied to a variety of platforms and services, including (but not limited to):

  • Microsoft 365 (e.g., Exchange, SharePoint, Teams)
  • Azure services (e.g., Azure Storage, Azure Virtual Machines)
  • Third-party SaaS applications integrated with Azure AD
PV021 DNS Filtering

Domain Name System (DNS) filtering allows the blocking of domain resolution for specific domains or automatically categorized classes of domains (depending on the functionality of the software or appliance being used). DNS filtering prevents users from accessing blocked domains, regardless of the IP address the domains resolve to.

 

Examples of automatically categorized classes of domains are ‘gambling’ or ‘social networking’ domains. Automatic categorization of domains is typically conducted by the software or appliance being used, whereas specific domains can be blocked manually. Most DNS filtering software or appliances also provide the ability to use Regular Expressions (RegEx), for example to filter all subdomains of a specified domain.

DNS filtering can be applied on an individual host, such as with the hosts file, or for multiple hosts via a DNS server or firewall.
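
As a minimal sketch of that subdomain-matching behaviour, the following assumes a small, hypothetical blocklist and checks whether a queried name matches a blocked domain or any of its subdomains.

```python
# Minimal sketch: match a queried name against a blocklist, including
# all subdomains of each blocked domain (the RegEx-style behaviour
# described above). Blocklist contents are hypothetical examples.
import re

BLOCKED_DOMAINS = ["example-filesharing.com", "example-paste.io"]  # hypothetical

patterns = [
    re.compile(rf"(^|\.){re.escape(domain)}$", re.IGNORECASE)
    for domain in BLOCKED_DOMAINS
]

def is_blocked(query_name: str) -> bool:
    name = query_name.rstrip(".")
    return any(p.search(name) for p in patterns)

if __name__ == "__main__":
    for q in ["cdn.example-filesharing.com", "example.org"]:
        print(q, "->", "BLOCK" if is_blocked(q) else "allow")
```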

PV055 Enforce Multi-Factor Authentication (MFA)

Multi-Factor Authentication (MFA) is a critical component of a comprehensive security strategy, providing an additional layer of defense by requiring more than just a password for system access. This multi-layered approach significantly reduces the risk of unauthorized access, especially in cases where an attacker has obtained or guessed a user’s credentials. MFA is particularly valuable in environments where attackers may have gained access to user credentials via phishing, data breaches, or social engineering.

 

For organizations, enabling MFA across all critical systems is essential. This includes systems such as Active Directory, VPNs, cloud platforms (e.g., AWS, Azure, Google Cloud), internal applications, and any resources that store sensitive data. MFA ensures that access control is not solely dependent on passwords, which are vulnerable to compromise. Systems that are protected by MFA require users to authenticate via at least two separate factors: something they know (e.g., a password), and something they have (e.g., a hardware token or a mobile device running an authenticator app).

 

The strength of MFA depends heavily on the factors chosen. Hardware-based authentication devices, such as FIDO2 or U2F security keys (e.g., YubiKey), offer a higher level of security because they are immune to phishing attacks. These keys use public-key cryptography, meaning that authentication tokens are never transmitted over the network, reducing the risk of interception. In contrast, software-based MFA solutions, like Google Authenticator or Microsoft Authenticator, generate one-time passcodes (OTPs) that are time-based and typically expire after a short window (e.g., 30 seconds). While software-based tokens offer a strong level of security, they can be vulnerable to device theft or compromise if not properly secured.
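
To illustrate how such time-based passcodes are derived, the following is a minimal sketch in the style of RFC 6238 (HMAC-SHA1, 6 digits, 30-second window); the Base32 secret shown is a placeholder demo value.

```python
# Minimal sketch of a time-based one-time passcode (TOTP): HMAC-SHA1
# over the current 30-second counter, dynamically truncated to 6 digits,
# in the style of RFC 6238 / RFC 4226. The secret is a placeholder.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # moving time-step counter
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

if __name__ == "__main__":
    print(totp("JBSWY3DPEHPK3PXP"))  # placeholder demo secret
```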

 

To maximize the effectiveness of MFA, organizations should integrate it with their Identity and Access Management (IAM) system. This ensures that MFA is uniformly enforced across all access points, including local and remote access, as well as access for third-party vendors or contractors. Through integration, organizations can enforce policies such as requiring MFA for privileged accounts (e.g., administrators), as these accounts represent high-value targets for attackers seeking to escalate privileges within the network.

 

It is equally important to implement adaptive authentication or risk-based MFA, where the system dynamically adjusts its security requirements based on factors such as user behavior, device trustworthiness, or geographic location. For example, if a subject logs in from an unusual location or device, the system can automatically prompt for an additional factor, further reducing the likelihood of unauthorized access.

 

Regular monitoring and auditing of MFA usage are also critical. Organizations should actively monitor for suspicious activity, such as failed authentication attempts or anomalous login patterns. Logs generated by the Authentication Service Providers (ASPs), such as those from Azure AD or Active Directory, should be reviewed regularly to identify signs of attempted MFA bypass, such as frequent failures or the use of backup codes. In addition, setting up alerts for any irregular MFA activity can provide immediate visibility into potential incidents.

 

Finally, when a subject no longer requires access, it is critical that MFA access is promptly revoked. This includes deactivating hardware security keys, unlinking software tokens, and ensuring that any backup codes or recovery methods are invalidated. Integration with the organization’s Lifecycle Management system is essential to automate the deactivation of MFA credentials during role changes or when an employee departs.

PV006 Install a Web Proxy Solution

A web proxy allows specific web resources to be blocked, preventing clients from successfully connecting to them.

PV005 Install an Anti-Virus Solution

An anti-virus solution detects and alerts on malicious files, and can take autonomous actions such as quarantining or deleting the flagged file.

PV048 Privileged Access Management (PAM)

Privileged Access Management (PAM) is a critical security practice designed to control and monitor access to sensitive systems and data. By managing and securing accounts with elevated privileges, PAM helps reduce the risk of insider threats and unauthorized access to critical infrastructure.

 

Key Prevention Measures:


  • Least Privilege Access: PAM enforces the principle of least privilege by ensuring users only have access to the systems and data necessary for their role, limiting opportunities for misuse.
  • Just-in-Time (JIT) Access: PAM solutions provide temporary, on-demand access to privileged accounts, ensuring users can only access sensitive environments for a defined period, minimizing exposure.
  • Centralized Credential Management: PAM centralizes the management of privileged accounts and credentials, automatically rotating passwords and securely storing sensitive information to prevent unauthorized access.
  • Monitoring and Auditing: PAM solutions continuously monitor and log privileged user activities, providing a detailed audit trail for detecting suspicious behavior and ensuring accountability.
  • Approval Workflows: PAM incorporates approval processes for accessing privileged accounts, ensuring that elevated access is granted only when justified and authorized by relevant stakeholders.

 

Benefits:


PAM enhances security by reducing the attack surface, improving compliance with regulatory standards, and enabling greater control over privileged access. It provides robust protection for critical systems by limiting unnecessary exposure to high-level access, facilitating auditing and accountability, and minimizing opportunities for both insider and external threats.

PV002 Restrict Access to Administrative Privileges

The Principle of Least Privilege should be enforced, and periodic reviews of permissions conducted to ensure that accounts have the minimum level of access required to perform their role.

Detection

ID Name Description
DT046 Agent Capable of Endpoint Detection and Response

An agent capable of Endpoint Detection and Response (EDR) is a software agent installed on organization endpoints (such as laptops and servers) that (at a minimum) records the Operating System, application, and network activity on an endpoint.

 

Typically, EDR operates in an agent/server model: agents automatically send logs to a server, which correlates them against a rule set. The rule set is used to surface potential security-related events, which can then be analyzed.

 

An EDR agent typically also provides some form of remote shell capability, allowing a user of the EDR platform to open a remote shell session on a target endpoint for incident response purposes. An EDR agent will also typically have the ability to remotely isolate an endpoint, blocking all network activity on the target endpoint other than that required for the EDR platform to operate.

DT047 Agent Capable of User Behaviour Analytics

An agent capable of User Behaviour Analytics (UBA) is a software agent installed on organizational endpoints (such as laptops). Typically, UBA agents are deployed only on endpoints where a human user is expected to conduct activity.

 

The User Behaviour Analytics agent will typically record Operating System, application, and network activity occurring on an endpoint, focusing on activity that is or can be conducted by a human user. Typically, User Behaviour Analytics platforms operate in an agent/server model where activity logs are sent to a server for automatic analysis. In the case of User Behaviour Analytics, this analysis will typically be conducted against a baseline that has previously been established.

 

A User Behaviour Analytics platform will typically conduct a period of ‘baselining’ when it is first installed. This baselining period establishes the normal behavior parameters for an organization’s users, which are used to train a Machine Learning (ML) model. The ML model can then be used to automatically identify activity that is predicted to be anomalous, with the aim of surfacing user behavior that is undesirable, risky, or malicious.
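
Vendor implementations vary widely; as a simplified stand-in for the baselining idea, the following sketch flags a daily activity count that exceeds a per-user baseline by more than three standard deviations. The data shape and threshold are illustrative assumptions, not any particular product's model.

```python
# Simplified stand-in for behavioral baselining: flag a user's daily
# event count when it exceeds the baseline mean by more than three
# standard deviations. Real UBA platforms use richer ML models.
from statistics import mean, stdev

def is_anomalous(baseline_counts: list[int], todays_count: int,
                 threshold_sigma: float = 3.0) -> bool:
    if len(baseline_counts) < 2:
        return False  # not enough history to establish a baseline
    mu = mean(baseline_counts)
    sigma = stdev(baseline_counts) or 1.0  # avoid divide-by-zero
    return (todays_count - mu) / sigma > threshold_sigma

if __name__ == "__main__":
    history = [120, 95, 110, 130, 105, 118, 101]  # hypothetical daily counts
    print(is_anomalous(history, 640))  # large spike -> True
```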

 

Other platforms providing related functionality are frequently referred to as User Activity Monitoring (UAM) platforms.

DT011 Cyber Deception, Honey User

In cyber deception, a "honey user" (or "honey account") is a decoy user account designed to detect and monitor malicious activities. These accounts attract attackers by appearing legitimate or using common account names, but any interaction with them is highly suspicious and flagged for investigation. Honey users can be deployed in various forms, such as Active Directory users, local system accounts, web application users, and cloud users.
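
One way such interactions can be surfaced is sketched below, assuming exported Windows logon events in JSON-lines form with EventID and TargetUserName fields; the file path and decoy account name are placeholders.

```python
# Minimal sketch: scan exported logon events (assumed JSON lines with
# "EventID" and "TargetUserName" fields) for any use of a designated
# honey account. File path and account name are placeholders.
import json

HONEY_ACCOUNTS = {"svc_backup_admin"}          # hypothetical decoy account
LOGON_EVENT_IDS = {4624, 4625, 4768, 4776}     # logon success/failure, Kerberos/NTLM auth

def find_honey_hits(path: str):
    hits = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            event = json.loads(line)
            if (event.get("EventID") in LOGON_EVENT_IDS
                    and event.get("TargetUserName", "").lower()
                    in {a.lower() for a in HONEY_ACCOUNTS}):
                hits.append(event)
    return hits

if __name__ == "__main__":
    for hit in find_honey_hits("security_events.jsonl"):
        print("Honey account touched:", hit)
```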

DT097 Deep Packet Inspection

Implement Deep Packet Inspection (DPI) tools to inspect the content of network packets beyond the header information. DPI can identify unusual patterns and hidden data within legitimate protocols. DPI can be conducted with a range of software and hardware solutions, such as Unified Threat Management (UTM) appliances and Next-Generation Firewalls (NGFWs), as well as Intrusion Detection and Prevention Systems (IDPS) such as Snort and Suricata.

DT051 DNS Logging

Logging DNS requests made by corporate devices can assist with identifying what web resources a system has attempted to or successfully accessed.

DT096 DNS Monitoring

Monitor outbound DNS traffic for unusual or suspicious queries that may indicate DNS tunneling. DNS monitoring entails observing and analyzing Domain Name System (DNS) queries and responses to identify abnormal or malicious activities. This can be achieved using various security platforms and network appliances, including Network Intrusion Detection Systems (NIDS), specialized DNS services, and Security Information and Event Management (SIEM) systems that process DNS logs.
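
Two heuristics commonly used for this purpose, unusually long query names and high-entropy labels, are sketched below; the thresholds are illustrative assumptions and would need tuning per environment.

```python
# Minimal sketch of two common DNS-tunneling heuristics: unusually long
# query names and high-entropy (random-looking) subdomain labels.
# Thresholds are illustrative assumptions.
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_tunneling(query: str,
                         max_len: int = 60,
                         max_entropy: float = 3.5) -> bool:
    labels = query.rstrip(".").split(".")
    subdomain = ".".join(labels[:-2]) if len(labels) > 2 else ""
    return len(query) > max_len or (
        len(subdomain) >= 16 and shannon_entropy(subdomain) > max_entropy
    )

if __name__ == "__main__":
    print(looks_like_tunneling("mail.example.com"))                    # False
    print(looks_like_tunneling("a9f3k2lq8zx0ww7rr1c9d2.example.com"))  # True
```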

DT146 File Integrity Monitoring

File Integrity Monitoring (FIM) is a technical control designed to detect unauthorized modification, deletion, or creation of files and configurations on monitored systems. The most basic implementation method is cryptographic hash comparison, where a known-good baseline hash (typically SHA-256 or SHA-1) is calculated and stored for each monitored file. At regular intervals (or in real time), current file states are re-hashed and compared to the baseline. Any discrepancy in hash value, size, permissions, or timestamp is flagged as an integrity violation.
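
A minimal sketch of that baseline-and-compare approach, using SHA-256 over a monitored directory, follows; the monitored path and baseline file location are placeholders.

```python
# Minimal sketch of hash-based FIM: build a SHA-256 baseline for a
# monitored directory, then re-scan and report added, removed, or
# modified files. Paths and the baseline file name are placeholders.
import hashlib
import json
from pathlib import Path

def sha256_file(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def snapshot(root: str) -> dict[str, str]:
    return {str(p): sha256_file(p) for p in Path(root).rglob("*") if p.is_file()}

def compare(baseline: dict[str, str], current: dict[str, str]) -> None:
    for path in current.keys() - baseline.keys():
        print("ADDED   ", path)
    for path in baseline.keys() - current.keys():
        print("REMOVED ", path)
    for path in baseline.keys() & current.keys():
        if baseline[path] != current[path]:
            print("MODIFIED", path)

if __name__ == "__main__":
    baseline_file = Path("fim_baseline.json")   # placeholder location
    monitored = "/usr/local/bin"                # placeholder directory
    if not baseline_file.exists():
        baseline_file.write_text(json.dumps(snapshot(monitored)))
    else:
        compare(json.loads(baseline_file.read_text()), snapshot(monitored))
```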

While hash comparison is foundational, mature File Integrity Monitoring (FIM) solutions incorporate additional telemetry and instrumentation to increase forensic depth, reduce false positives, and support attribution:

 

  • ACL and Permission Monitoring: Captures unauthorized changes to file ownership, execution flags (e.g. chmod +x), NTFS permissions, or group inheritance, critical for detecting silent privilege escalation.
  • Timestamp Integrity Checks: Monitors for retroactive or unnatural changes to creation, modification, and access timestamps, commonly associated with anti-forensic behaviors such as timestomping.
  • Event-based Hooks: Leverages OS-native event subsystems (e.g. Windows ETW, USN Journal; Linux inotify, auditd, fanotify) to trigger high-fidelity alerts on file system activity without waiting for interval-based scans.
  • Process Attribution: Enriches FIM events with the user identity, process name, PID, and command line responsible for the change, enabling precise correlation with session logs, drift indicators, and subject behavior.
  • Snapshot or Versioned Comparisons: Enables file state diffing across time, including rollback of modified artifacts or analysis of change sequences (common in forensic suites and some EDR platforms).

 

To be effective in insider threat contexts, File Integrity Monitoring should be explicitly tuned to monitor (at minimum):

 

  • Executable and script directories (%ProgramFiles%, %APPDATA%, /usr/local/bin/, /opt/)
  • Configuration and runtime paths (/etc/, C:\Windows\System32\Config, container volumes)
  • Security logs, audit trails, and telemetry agents (.evtx, /var/log/, SIEM client logs)
  • Credential storage and secrets locations (browser credential stores, password vaults, keyrings, .env files)
  • Backup and recovery tooling (scripts, snapshot schedulers, and volume metadata)

 

In ransomware or destruction scenarios, File Integrity Monitoring can detect the early stages of detonation by identifying rapid, high-volume file modifications and hash changes, particularly in mapped drives, document repositories, and shared storage. This can serve as a trigger for containment actions and/or investigation before full encryption completes, especially when correlated with process telemetry and known ransomware behaviors (e.g. deletion of shadow copies, entropy spikes).
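
As a simplified illustration of that burst signal, the following sketch flags any sliding window in which the number of distinct modified files exceeds a threshold; the event format, window size, and threshold are assumptions.

```python
# Minimal sketch of a burst detector for the "rapid, high-volume file
# modification" signal described above: flag when the number of distinct
# files modified within a sliding window exceeds a threshold.
from collections import deque

def detect_burst(events, window_seconds=60, max_files=200):
    """events: iterable of (epoch_seconds, file_path), ordered by time."""
    window = deque()          # (timestamp, path) pairs inside the window
    seen = set()              # distinct paths currently in the window
    for ts, path in events:
        window.append((ts, path))
        seen.add(path)
        while window and ts - window[0][0] > window_seconds:
            _, old_path = window.popleft()
            if all(p != old_path for _, p in window):
                seen.discard(old_path)
        if len(seen) > max_files:
            yield ts, len(seen)   # potential mass-encryption event

if __name__ == "__main__":
    # Hypothetical feed: hundreds of distinct files touched within seconds.
    burst = [(1000 + i // 50, f"/share/docs/file_{i}.docx") for i in range(500)]
    for ts, count in detect_burst(burst):
        print(f"Burst at t={ts}: {count} files modified in the last minute")
        break
```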

 

When tuned and deployed appropriately, File Integrity Monitoring provides a high-fidelity signal of tampering, staging, or covert access attempts, even when other telemetry (e.g. signature-based detection or anomaly modeling) fails to trigger. This makes it particularly valuable in environments where subjects have elevated access, control over telemetry agents, or knowledge of investigative blind spots.

DT098 NetFlow Analysis

Analyze network flow data (NetFlow) to identify unusual communication patterns and potential tunneling activities. Flow data offers insights into the volume, direction, and nature of traffic.

 

NetFlow, a protocol developed by Cisco, captures and records metadata about network flows—such as source and destination IP addresses, ports, and the amount of data transferred.
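
As an illustration of this kind of flow analysis, the following sketch aggregates outbound bytes per internal source address from a flow export assumed to be in CSV form; the column names, internal prefix, and volume threshold are assumptions.

```python
# Minimal sketch: aggregate outbound bytes per internal source address
# from a flow export assumed to be CSV with src_ip, dst_ip, and bytes
# columns, then flag hosts exceeding a volume threshold.
import csv
from collections import defaultdict
from ipaddress import ip_address, ip_network

INTERNAL_NET = ip_network("10.0.0.0/8")   # placeholder internal range
THRESHOLD_BYTES = 5 * 1024**3             # 5 GiB per reporting period (assumed)

def flag_heavy_talkers(flow_csv: str) -> dict[str, int]:
    outbound = defaultdict(int)
    with open(flow_csv, newline="") as fh:
        for row in csv.DictReader(fh):
            src, dst = ip_address(row["src_ip"]), ip_address(row["dst_ip"])
            if src in INTERNAL_NET and dst not in INTERNAL_NET:
                outbound[row["src_ip"]] += int(row["bytes"])
    return {ip: b for ip, b in outbound.items() if b > THRESHOLD_BYTES}

if __name__ == "__main__":
    for ip, total in flag_heavy_talkers("flows.csv").items():
        print(f"{ip} sent {total / 1024**3:.1f} GiB outbound")
```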

 

Various network appliances support NetFlow, including Next-Generation Firewalls (NGFWs), network routers and switches, and dedicated NetFlow collectors.

DT042 Network Intrusion Detection Systems

Network Intrusion Detection Systems (NIDS) can alert on abnormal, suspicious, or malicious patterns of network behavior. 

DT102 User and Entity Behavior Analytics (UEBA)

Deploy User and Entity Behavior Analytics (UEBA) solutions designed for cloud environments to monitor and analyze the behavior of users, applications, network devices, servers, and other non-human resources. UEBA systems track normal behavior patterns and detect anomalies that could indicate potential insider events. For instance, they can identify when a user or entity is downloading unusually large volumes of data, accessing an excessive number of resources, or engaging in data transfers that deviate from their usual behavior.

DT101 User Behavior Analytics (UBA)

Implement User Behavior Analytics (UBA) tools to continuously monitor and analyze user (human) activities, detecting anomalies that may signal security risks. UBA can track and flag unusual behavior, such as excessive data downloads, accessing a higher-than-usual number of resources, or large-scale transfers inconsistent with a user’s typical patterns. UBA can also provide real-time alerts when users engage in behavior that deviates from established baselines, such as accessing sensitive data during off-hours or from unfamiliar locations. By identifying such anomalies, UBA enhances the detection of insider events.

DT039 Web Proxy Logs

Depending on the solution used, web proxies can provide a wealth of information about web-based activity. This can include the IP address of the system making the web request, the URL requested, the response code, and timestamps.

An organization must perform SSL/TLS interception to receive the most complete information about these connections.