
Insider Threat Matrix™
  • ID: PV078
  • Created: 22nd October 2025
  • Updated: 22nd October 2025
  • Contributor: David Larsen

Service Account Classification and Scope Limitation

Establish and enforce strict classification, ownership, and access scope limitations for all service accounts. These non-human accounts often hold elevated privileges and operate without the same oversight as user accounts. When left ungoverned, they create blind spots in forensic reconstruction, increase the risk of lateral movement, and enable subjects to access sensitive systems without attribution.
 

Service accounts must be treated as operational identities, not technical abstractions. Without rigorous control, they are a frequent vector for privilege misuse, staging, and exfiltration behaviors.


Key Prevention Measures

  • Maintain a centralized inventory of all service accounts using identity providers such as Microsoft Entra ID, Okta, or on-premises Active Directory.
  • Require each service account to have a documented business owner responsible for its purpose and review.
  • Record the account's assigned system or integration point, authentication method, and intended function.
  • Tag all service accounts explicitly in directory metadata as non-human.
  • Block service accounts from interactive login, remote desktop sessions, and GUI-based authentication.
  • Use conditional access policies to restrict service account access to predefined IP ranges and service endpoints only.
  • Require credential rotation on all service accounts using platforms such as CyberArk, HashiCorp Vault, AWS Secrets Manager, or Azure Key Vault.
  • Implement just-in-time provisioning and session expiration for elevated service accounts using Privileged Access Management (PAM) tools.
  • Audit all service account permissions monthly to ensure least-privilege alignment with documented needs.
  • Automatically disable service accounts not used within a defined operational window unless a justified exemption is recorded.
  • Generate alerts when service accounts are used outside expected time windows, from unauthorized locations, or to access sensitive resources unrelated to their documented function.
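Several of the measures above (dormancy-based disabling and owner accountability in particular) can be sketched as a simple periodic review job. The record fields, account names, and 90-day threshold below are illustrative assumptions, not a real directory schema; in practice the inventory would be exported from Microsoft Entra ID, Okta, or Active Directory:

```python
from datetime import datetime, timedelta

# Hypothetical inventory records; field names are illustrative only.
ACCOUNTS = [
    {"name": "svc-backup", "last_used": datetime(2025, 10, 1), "owner": "infra-team"},
    {"name": "svc-legacy", "last_used": datetime(2025, 6, 15), "owner": None},
]

DORMANCY_WINDOW = timedelta(days=90)  # assumed operational window

def review_accounts(accounts, now):
    """Flag service accounts that are dormant or lack a documented owner."""
    findings = []
    for acct in accounts:
        if now - acct["last_used"] > DORMANCY_WINDOW:
            findings.append((acct["name"], "dormant: disable pending exemption review"))
        if acct["owner"] is None:
            findings.append((acct["name"], "no documented business owner"))
    return findings

print(review_accounts(ACCOUNTS, datetime(2025, 10, 22)))
```

A job like this would feed its findings into the monthly least-privilege audit rather than acting autonomously.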

 

Investigator Considerations

  • Service accounts used interactively are red flags during insider threat investigations, often indicating evasion of attribution or misuse of automation.
  • Misclassified or shared service accounts inhibit incident reconstruction and may obscure which subject initiated a given action.
  • High-volume data access by service accounts should be correlated with staging or exfiltration windows.
  • Accounts with privileged access but no assigned owner should be considered security gaps and reviewed as priority investigative artifacts.

Sections

AF024: Account Misuse

The subject deliberately misuses account constructs to obscure identity, frustrate attribution, or undermine investigative visibility. This includes the use of shared, secondary, abandoned, or illicitly obtained accounts in ways that violate access integrity and complicate forensic analysis.

 

Unlike traditional infringement behaviors, account misuse in the anti-forensics context is not about the action itself—but about how identity is obfuscated or displaced to conceal that action. These behaviors sever the link between subject and activity, impeding both real-time detection and retrospective investigation.

 

Common anti-forensic account misuse techniques include:

  • Operating across multiple sanctioned accounts to fragment behavior trails.
  • Using shared service accounts to mask individual actions.
  • Re-activating or leveraging dormant credentials to perform access without attribution.
  • Exploiting misconfigured or ghost accounts left behind by previous users, contractors, or integrations.

 

Investigators encountering unexplainable log artifacts, attribution conflicts, or unexpected session collisions should assess whether account misuse is being used as a deliberate concealment tactic. Particular attention should be paid in environments lacking centralized identity governance or with known privilege sprawl.

 

Account misuse as an anti-forensics strategy often coexists with more overt infringements—enabling data exfiltration, sabotage, or policy evasion while preserving plausible deniability. As such, its detection is crucial to understanding subject intent, tracing activity with confidence, and restoring the chain of custody in incident response.

AF024.002: Unauthorized Credential Use

The subject employs valid credentials that were obtained outside of sanctioned provisioning channels to conceal their identity or perform actions under a false or misleading identity. This behavior, categorized as unauthorized credential use, is distinct from traditional account compromise—it reflects insider-enabled misuse, not external intrusion.

 

Credentials may be acquired through casual observation (e.g., shoulder surfing or unlocked workstations), social engineering, prior access (e.g., retained credentials from a former role), or covert means such as password capture tools. In some cases, credentials may be voluntarily shared by a collaborator or acquired opportunistically from unmonitored or abandoned accounts.

 

This tactic allows the subject to dissociate their actions from their known identity, delay detection, and in some cases, redirect suspicion to another individual. When used within privileged or high-sensitivity environments, unauthorized credential use can enable significant harm while bypassing conventional identity-based controls and alerting mechanisms.

 

Unlike service account sharing or account obfuscation (which involve legitimate, active credentials assigned to the subject), this behavior revolves around unauthorized access to credentials not formally linked to the subject. Investigators should prioritize this sub-section when audit trails show activity under an identity that does not correspond to role expectations, known behavioral patterns, or device history.

 

Key forensic indicators include:

  • Activity under stale or supposedly deactivated credentials.
  • Access from unfamiliar endpoints using accounts with known role assignments.
  • Unusual timing or geographic patterns inconsistent with the account’s assigned user.
  • Discrepancies between identity artifacts (e.g., login metadata) and session content (e.g., typing cadence, application use).
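The first indicator above, activity under supposedly deactivated credentials, lends itself to a direct cross-check between sign-in events and account status. The event fields and account names below are hypothetical; real inputs would be a SIEM export joined against a directory-status export:

```python
# Accounts the identity system reports as deactivated (hypothetical names).
DEACTIVATED = {"j.doe.old", "contractor-42"}

# Hypothetical sign-in events; field names are illustrative only.
EVENTS = [
    {"account": "a.smith", "source": "WKS-114"},
    {"account": "j.doe.old", "source": "WKS-207"},  # stale credential in use
]

def flag_stale_credential_use(events, deactivated):
    """Return events performed under supposedly deactivated credentials."""
    return [e for e in events if e["account"] in deactivated]

for hit in flag_stale_credential_use(EVENTS, DEACTIVATED):
    print(f"ALERT: activity under deactivated account {hit['account']} from {hit['source']}")
```

Each hit is a candidate unauthorized-credential-use artifact and should be triaged against role expectations and device history, as described above.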

 

Unauthorized credential use is a high-risk concealment technique and often coincides with malicious or high-impact infringements.

IF025.001: Service Account Sharing

A subject deliberately shares credentials for non-personal, persistent service accounts (e.g., admin, automation, deployment) with other individuals, either within or outside their team. These accounts often lack individual attribution, and when shared, they create a pool of untracked, unaccountable access.

 

Service account sharing typically emerges in high-pressure operational environments where speed or convenience is prioritized over access hygiene. Teams may rationalize the behavior as necessary to meet deployment deadlines, maintain uptime, or circumvent perceived access bottlenecks. In other cases, access may be extended informally to external collaborators, such as contractors or partner engineers, without proper onboarding or oversight.

 

When service account credentials are distributed, they become functionally equivalent to a shared key—undermining all identity-based controls. Investigators lose the ability to reliably associate actions with individuals, making forensic attribution difficult or impossible. This gap often delays incident response and enables repeated policy violations without detection.

 

Service accounts also frequently carry elevated privileges, operate without MFA, and are excluded from normal user activity monitoring (UAM) logging, compounding the risk. Their use in this manner represents not just a technical misstep, but a structural breakdown in control integrity and accountability. In environments with compliance obligations or segmented access controls, service account sharing is a critical investigative red flag and should trigger formal review.
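One observable signature of shared service-account credentials is the same account holding overlapping sessions from distinct endpoints. A minimal sketch of that check, assuming a hypothetical session log of (account, host, start, end) tuples with integer timestamps:

```python
from collections import defaultdict

# Hypothetical session records; real data would come from VPN, SSO,
# or endpoint telemetry.
SESSIONS = [
    ("svc-deploy", "build-01", 100, 200),
    ("svc-deploy", "laptop-77", 150, 240),  # overlapping session, second host
    ("svc-etl", "etl-node", 50, 300),
]

def concurrent_hosts(sessions):
    """Report accounts with overlapping sessions from distinct hosts,
    a signature of distributed service-account credentials."""
    by_account = defaultdict(list)
    for acct, host, start, end in sessions:
        by_account[acct].append((host, start, end))
    flagged = set()
    for acct, sess in by_account.items():
        for i in range(len(sess)):
            for j in range(i + 1, len(sess)):
                (h1, s1, e1), (h2, s2, e2) = sess[i], sess[j]
                # Two intervals overlap when each starts before the other ends.
                if h1 != h2 and s1 < e2 and s2 < e1:
                    flagged.add(acct)
    return flagged

print(concurrent_hosts(SESSIONS))
```

A flagged account does not prove sharing on its own (automation can fan out legitimately), but it narrows the review set for the formal review described above.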

ME021.001: User Account Credentials

User credentials available to the subject during their employment are not revoked when that access should end and can still be used.

IF028.002: AI Agent Privilege Exploitation

A subject commits an infringement by exploiting the elevated, aggregated, or differently scoped permissions of an artificial intelligence (AI) agent to obtain access to restricted data or systems beyond their authorized role.

 

This behavior occurs when an AI agent operates with service account privileges, enterprise-wide indexing authority, cross-platform integrations, or API-level permissions that exceed the subject’s direct interactive access. The subject intentionally leverages that authority to retrieve, view, or extract protected information.

 

The infringement is established when the AI agent accesses restricted repositories, datasets, or systems that the subject could not lawfully access using their own credentials. The harm lies in the bypass of role-based access controls through delegated authority.

 

Examples include:

 

  • Using an enterprise AI platform with organization-wide document indexing to retrieve files from restricted executive, legal, or HR repositories.
  • Directing an AI-integrated service account to query databases unavailable to the subject’s user account.
  • Leveraging AI platform integrations with identity or HR systems to obtain sensitive personnel or compensation data outside the subject’s authorization.
  • Extracting restricted documents through the AI interface that are not visible through the subject’s standard application access.

 

The defining characteristic is delegated access control bypass. The AI agent exercises permissions that differ from or exceed the subject’s own access scope, and the subject exploits that differential to obtain protected information.
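A preventive control against this differential is to intersect the agent's effective scope with the requesting user's own entitlements before the agent acts on their behalf. The scope and user names below are hypothetical, a sketch of the authorization check rather than any particular platform's API:

```python
# Hypothetical effective scope of the AI agent's service account.
AGENT_SCOPE = {"hr-repo", "legal-repo", "eng-wiki", "finance-db"}

# Hypothetical per-user entitlements from the identity provider.
USER_ENTITLEMENTS = {
    "analyst-1": {"eng-wiki"},
    "hr-admin": {"hr-repo", "eng-wiki"},
}

def authorize_delegated_access(user, resource):
    """Permit agent access only when the requesting user could reach the
    resource directly; otherwise the request exploits the privilege
    differential between agent and user."""
    if resource not in AGENT_SCOPE:
        return False  # the agent itself cannot reach it
    return resource in USER_ENTITLEMENTS.get(user, set())

print(authorize_delegated_access("analyst-1", "hr-repo"))  # False: blocked
print(authorize_delegated_access("hr-admin", "hr-repo"))   # True: allowed
```

Enforcing the user's scope at delegation time, rather than trusting the agent's aggregate scope, closes the bypass this section describes.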

 

The subject remains fully accountable for the misuse of the agent’s authority. The infringement arises from leveraging expanded system trust to circumvent established access controls.

IF028.003: AI Agent Impersonation Execution

A subject commits an infringement by delegating impersonation activity to an artificial intelligence (AI) agent that autonomously or semi-autonomously executes deceptive communications within or outside the organization.

 

This behavior occurs when a subject configures or tasks an AI agent to replicate the identity, tone, authority, or communication style of another individual (such as an executive, HR representative, legal counsel, or trusted colleague) and the agent executes impersonation actions that result in material harm.

 

The AI agent may be directed to:

 

  • Learn or replicate a specific identity based on internal communications.
  • Generate context-aware communications dynamically.
  • Automatically send or respond to messages.
  • Adapt content based on recipient replies.
  • Sustain multi-step interactions without direct manual drafting by the subject.

 

Unlike manual impersonation, this behavior involves delegated execution. The AI agent operates as the impersonation engine, producing and transmitting deceptive content at scale or with persistence beyond what the subject could realistically maintain manually.

 

Examples include:

 

  • An AI agent generating and dispatching executive-style requests for financial transfers.
  • Automated conversations designed to solicit credentials or sensitive documents.
  • AI-driven responses to follow-up questions that maintain the credibility of a fabricated identity.
  • Persistent impersonation campaigns targeting internal departments or external partners.

 

The infringement is established when the AI agent executes deceptive communications that result in fraud, credential compromise, unauthorized disclosure, reputational harm, or operational disruption.

 

The defining characteristic is the autonomous execution of impersonation through an AI agent acting under the subject’s direction.

 

The subject remains fully accountable for the deception and resulting harm. The AI agent amplifies realism, adaptability, and scale, significantly increasing the effectiveness and persistence of impersonation-based misconduct.

PR035.001: AI Agent Data Staging

A subject prepares for potential insider activity by directing an artificial intelligence (AI) agent to aggregate, organize, or transform sensitive organizational data into structured or portable formats.

 

This behavior occurs when an AI agent is tasked with systematically collecting information from internal repositories and consolidating it into outputs that are easier to store, review, transfer, or exploit. The agent performs bulk summarization, data normalization, or cross-repository aggregation that significantly reduces the effort required to later misuse the information.

 

Unlike reconnaissance activities that focus on discovering intelligence, AI Agent Data Staging focuses on operational preparation of data. The AI agent converts dispersed or complex internal information into consolidated outputs that increase its portability, usability, or accessibility outside its original context.

 

Examples include:

 

  • Directing an AI agent to compile documents from multiple internal repositories into a consolidated report or briefing.
  • Aggregating large volumes of internal documentation into structured summaries or datasets.
  • Transforming proprietary knowledge bases or technical documentation into simplified formats suitable for external distribution.
  • Generating derivative outputs that remove contextual safeguards such as system dependencies, formatting controls, or embedded metadata.
  • Organizing large collections of files or records into categorized outputs intended for later retrieval or transfer.

 

The defining characteristic of this Sub-section is the delegated consolidation of sensitive information. The subject leverages the AI agent to perform scalable data preparation that increases the volume, portability, or usability of organizational data.

 

While the staged data may not yet have been transferred outside the organization, the consolidation process materially lowers the effort required to exfiltrate or exploit it. In environments where AI platforms possess broad repository visibility, this capability can significantly accelerate the preparation phase of insider activity.
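One way to surface this consolidation pattern is to measure how many distinct repositories an agent session draws from: legitimate tasks tend to stay narrow, while staging spans many sources. The audit-record fields, session IDs, and threshold below are illustrative assumptions:

```python
# Hypothetical AI-agent audit trail: each record names the repository a
# retrieved document came from.
AGENT_READS = [
    {"session": "s1", "repo": "eng-wiki"},
    {"session": "s1", "repo": "legal-repo"},
    {"session": "s1", "repo": "hr-repo"},
    {"session": "s1", "repo": "finance-db"},
    {"session": "s2", "repo": "eng-wiki"},
]

REPO_BREADTH_THRESHOLD = 3  # assumed: distinct repositories per session

def flag_aggregation_sessions(reads, threshold):
    """Flag agent sessions that consolidate data across unusually many
    repositories, a possible staging signal."""
    repos = {}
    for r in reads:
        repos.setdefault(r["session"], set()).add(r["repo"])
    return [s for s, rs in repos.items() if len(rs) >= threshold]

print(flag_aggregation_sessions(AGENT_READS, REPO_BREADTH_THRESHOLD))
```

The threshold would need tuning per environment; the point is that breadth of consolidation, not any single read, is the staging indicator.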

IF028.001: AI Agent Internal Reconnaissance

A subject causes harm by directing an artificial intelligence (AI) agent to generate unauthorized internal intelligence about the organization—information the subject would not reasonably possess through legitimate role-based access or business need.

 

This behavior occurs when an AI agent is used to systematically query, correlate, or infer sensitive internal information by leveraging its broad integrations, indexing authority, or analytical capabilities. The resulting intelligence may reveal restricted projects, executive decisions, legal matters, acquisition activity, investigation status, architectural weaknesses, or other sensitive organizational insight.

 

Unlike routine search or manual browsing, AI agent internal reconnaissance enables:

 

  • Cross-repository correlation of fragmented data.
  • Inference generation from distributed signals.
  • Relationship mapping between people, systems, and initiatives.
  • Aggregation of intelligence from platforms the subject does not routinely access.
  • Persistent monitoring for developments related to sensitive topics.

 

In certain enterprise deployments, authorized AI platforms possess integration-level access across knowledge bases, ticketing systems, document repositories, messaging platforms, and identity directories. A subject may deliberately leverage this aggregated visibility to extract or infer intelligence beyond their legitimate business scope.

 

Examples include:

 

  • Tasking an AI agent to determine whether a confidential acquisition is underway by correlating calendar entries, procurement tickets, and legal document metadata.
  • Directing an agent to summarize all references to an internal investigation across multiple repositories.
  • Instructing an agent to infer likely restructuring plans based on hiring freezes, budget adjustments, and executive communications.
  • Using an AI platform’s enterprise-wide indexing capability to identify sensitive project names or legal matters outside the subject’s department.

 

The infringement is established when the agent-generated intelligence materially exceeds legitimate business need and results in unauthorized exposure of sensitive organizational insight. The harm lies in the unauthorized acquisition of internal intelligence—particularly where that intelligence enables subsequent exploitation, trading, coercion, or strategic misuse.

 

The defining characteristic is not merely access, but computational synthesis. The AI agent transforms distributed internal data into actionable intelligence that the subject could not reasonably derive manually within policy boundaries.
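Because the harm here is synthesis beyond legitimate scope, one detection angle is to compare the platforms an agent queried on a user's behalf against that user's historical access footprint. The baseline and platform names below are hypothetical illustrations of the comparison, not a real telemetry schema:

```python
# Hypothetical per-user baseline of routinely accessed platforms,
# e.g. derived from prior months of access telemetry.
BASELINE = {"analyst-1": {"ticketing", "eng-wiki"}}

def off_baseline_platforms(user, queried_platforms):
    """Return platforms the agent touched on the user's behalf that fall
    outside the user's routine footprint: candidate reconnaissance signals."""
    return sorted(set(queried_platforms) - BASELINE.get(user, set()))

print(off_baseline_platforms("analyst-1", ["ticketing", "legal-dms", "hr-directory"]))
```

Off-baseline queries are not infringements by themselves, but clusters of them around sensitive topics (legal matters, investigations, executive planning) align with the reconnaissance pattern this section describes.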