- ID: DT113
- Created: 28th April 2025
- Updated: 28th April 2025
- Contributor: The ITM Team
Tracking Patterns of Policy Violations
Monitor and analyze minor policy violations over time to detect emerging behavioral patterns that may indicate boundary testing, behavioral drift, or preparation for more serious misconduct. Isolated minor infringements may appear benign, but repeated or clustered incidents can signal a developing threat trajectory.
Detection Methods
- Maintain centralized logging of all recorded policy violations, including low-severity infractions, within case management, HR, or security systems.
- Implement analytical tools or workflows that flag individuals with multiple minor violations within defined timeframes (e.g., repeated unauthorized device use, bypassing security protocols, small unauthorized disclosures).
- Correlate minor violation data with other risk indicators such as unauthorized access attempts, changes in behavioral baselines, or indicators of disgruntlement.
- Analyze patterns across teams, units, or operational areas to detect systemic issues or cultural tolerance of rule-breaking behaviors.
- Conduct periodic behavioral risk reviews that explicitly include minor infractions as part of insider threat monitoring programs.
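The flagging workflow described above can be sketched as a small script. This is a minimal illustration, assuming a simple in-memory list of violation records; the record layout, the 90-day rolling window, and the three-violation threshold are illustrative assumptions, not prescribed values:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical violation records: (subject_id, timestamp, severity, domain).
# In practice these would be pulled from case management, HR, or SIEM systems.
VIOLATIONS = [
    ("u1001", datetime(2025, 1, 10), "low", "it_security"),
    ("u1001", datetime(2025, 2, 2), "low", "hr_policy"),
    ("u1001", datetime(2025, 3, 15), "low", "operational"),
    ("u2002", datetime(2025, 1, 5), "low", "it_security"),
]

def flag_repeat_offenders(violations, window_days=90, threshold=3):
    """Return subjects with >= threshold low-severity violations
    falling inside any rolling window of window_days."""
    by_subject = defaultdict(list)
    for subject, ts, severity, _domain in violations:
        if severity == "low":
            by_subject[subject].append(ts)
    flagged = set()
    window = timedelta(days=window_days)
    for subject, stamps in by_subject.items():
        stamps.sort()
        for start in stamps:
            # Count violations in the window beginning at this violation.
            in_window = [t for t in stamps if start <= t <= start + window]
            if len(in_window) >= threshold:
                flagged.add(subject)
                break
    return flagged

print(flag_repeat_offenders(VIOLATIONS))  # {'u1001'}
```

The output of a pass like this would feed a behavioral risk review queue rather than trigger automatic action, since isolated low-severity infractions are expected in any workforce.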
Indicators
- Subjects accumulating multiple low-level infractions without corresponding corrective action or behavioral improvement.
- Increased frequency or severity of minor violations over time, suggesting desensitization or emboldenment.
- Violations spanning multiple domains (e.g., IT security, operational protocols, HR policy), indicating generalized disregard for rules.
- Evidence that minor violations are clustered around operational pressures, major organizational changes, or periods of reduced oversight.
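The "increased frequency over time" indicator can be approximated with a simple bucketing heuristic: count violations per fixed period and flag subjects whose most recent period exceeds their earlier average. A minimal sketch, assuming timestamped records per subject; the 30-day period and the comparison rule are arbitrary illustrative choices:

```python
from datetime import datetime, timedelta

def is_escalating(timestamps, period_days=30):
    """Heuristic escalation check: bucket violations into consecutive
    periods and flag if the most recent period exceeds the average
    of all earlier periods."""
    if len(timestamps) < 3:
        return False  # too little history to infer a trend
    timestamps = sorted(timestamps)
    start, end = timestamps[0], timestamps[-1]
    period = timedelta(days=period_days)
    counts = []
    cursor = start
    while cursor <= end:
        # Count violations falling in [cursor, cursor + period).
        counts.append(sum(1 for t in timestamps if cursor <= t < cursor + period))
        cursor += period
    earlier = counts[:-1]
    return bool(earlier) and counts[-1] > sum(earlier) / len(earlier)
```

A production implementation would likely use change-point detection or severity weighting instead of a raw count comparison, but the shape of the signal is the same: a rising per-period count suggests desensitization or emboldenment.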
Sections
| ID | Name | Description |
|---|---|---|
| MT022 | Boundary Testing | The subject deliberately pushes or tests organizational policies, rules, or controls to assess tolerance levels, detect oversight gaps, or gain a sense of impunity. While initial actions may appear minor or exploratory, boundary testing serves as a psychological and operational precursor to more serious misconduct.
Example Scenario: A subject repeatedly circumvents minor IT security controls (e.g., bypassing content filters, using personal devices against policy) without immediate consequences. Encouraged by the lack of enforcement, the subject later undertakes unauthorized data transfers, rationalizing the behavior based on perceived inefficiencies and low risk of detection. |
| PR029 | Persistent Access via Bots | The subject exploits their technical role to deploy or manipulate automated bots within the organization’s environment—most commonly within collaboration platforms (e.g., Slack, Teams, Discord) or internal operational systems (e.g., Jira, ServiceNow, Helpdesk tooling). These bots are designed to persist beyond the subject’s tenure, leveraging independent service credentials (or other credentials not specifically associated with a user), webhook integrations, or unattended workflows to maintain covert access.
The subject may create new bots under the guise of legitimate productivity enhancements, or hijack existing integrations to expand data access, redirect output, or embed hidden monitoring functionality. Once active, these bots operate continuously, harvesting internal conversations, extracting files, or polling sensitive endpoints—often without triggering standard audit alerts tied to user accounts.
Because automation accounts are rarely subject to the same identity governance or offboarding scrutiny as human users, this technique enables long-term persistence, broad data visibility, and operational concealment, facilitating continued access or covert surveillance after the subject’s departure. |
| MT015.001 | Opportunism | The subject exploits circumstances for personal gain, convenience, or advantage, often without premeditation or major malicious intent. Opportunistic acts typically arise from perceived gaps in oversight, immediate personal needs, or desires, rather than long-term ideological, financial, or revenge-driven motivations.
Example Scenario: Senior enlisted personnel on a U.S. Navy warship collaborated to procure and install unauthorized satellite internet equipment (Starlink) to improve their onboard quality of life. Acting without command approval, they circumvented Navy IT security protocols, introducing significant operational security (OPSEC) risks. Their motive was personal convenience rather than espionage, sabotage, or financial gain. |
| IF027.005 | Destructive Malware Deployment | The subject deploys destructive malware: software designed to irreversibly damage systems, erase data, or disrupt operational availability. Unlike ransomware, which encrypts files to extort payment, destructive malware is deployed with the explicit intent to delete, corrupt, or disable systems and assets without recovery. Its objective is disruption or sabotage, not necessarily direct financial gain.
Insiders may deploy destructive malware as an act of retaliation (e.g. prior to departure), sabotage (e.g. to disrupt an investigation or competitor), or under coercion. Detonation may be manual or scheduled, and in some cases the malware is disguised as routine tooling to delay detection.
Destructive deployment is high-severity and often coincides with forensic tampering or precursor access-based infringements (e.g. file enumeration or backup deletion). |
| AF029.002 | Unauthorized VPN Usage | The subject deliberately uses Virtual Private Network (VPN) technology in a manner that circumvents organizational oversight, masking the nature, destination, or content of network activity. This includes installing unapproved VPN clients, as well as reconfiguring sanctioned VPN software to route traffic through unauthorized exit nodes, personal infrastructure, or third-party services not governed by corporate policy.
By diverting traffic away from monitored pathways, the subject obstructs standard telemetry collection, evading logging of session destinations, data transfers, or identity-bound usage. This behavior frustrates forensic reconstruction, hinders real-time monitoring, and degrades the reliability of investigative artifacts. Unauthorized VPN usage is an intentional anti-forensics measure aimed at concealing potentially harmful activity behind layers of encrypted and unsanctioned transit. |
| PR035.001 | AI Agent Data Staging | A subject prepares for potential insider activity by directing an artificial intelligence (AI) agent to aggregate, organize, or transform sensitive organizational data into structured or portable formats.
This behavior occurs when an AI agent is tasked with systematically collecting information from internal repositories and consolidating it into outputs that are easier to store, review, transfer, or exploit. The agent performs bulk summarization, data normalization, or cross-repository aggregation that significantly reduces the effort required to later misuse the information.
Unlike reconnaissance activities that focus on discovering intelligence, AI Agent Data Staging focuses on operational preparation of data. The AI agent converts dispersed or complex internal information into consolidated outputs that increase its portability, usability, or accessibility outside its original context.
The defining characteristic of this Sub-section is the delegated consolidation of sensitive information. The subject leverages the AI agent to perform scalable data preparation that increases the volume, portability, or usability of organizational data.
While the staged data may not yet have been transferred outside the organization, the consolidation process materially lowers the effort required to exfiltrate or exploit it. In environments where AI platforms possess broad repository visibility, this capability can significantly accelerate the preparation phase of insider activity. |
| IF028.001 | AI Agent Internal Reconnaissance | A subject causes harm by directing an artificial intelligence (AI) agent to generate unauthorized internal intelligence about the organization—information the subject would not reasonably possess through legitimate role-based access or business need.
This behavior occurs when an AI agent is used to systematically query, correlate, or infer sensitive internal information by leveraging its broad integrations, indexing authority, or analytical capabilities. The resulting intelligence may reveal restricted projects, executive decisions, legal matters, acquisition activity, investigation status, architectural weaknesses, or other sensitive organizational insight.
Unlike routine search or manual browsing, AI agent internal reconnaissance enables rapid, large-scale correlation of information across systems.
In certain enterprise deployments, authorized AI platforms possess integration-level access across knowledge bases, ticketing systems, document repositories, messaging platforms, and identity directories. A subject may deliberately leverage this aggregated visibility to extract or infer intelligence beyond their legitimate business scope.
The infringement is established when the agent-generated intelligence materially exceeds legitimate business need and results in unauthorized exposure of sensitive organizational insight. The harm lies in the unauthorized acquisition of internal intelligence—particularly where that intelligence enables subsequent exploitation, trading, coercion, or strategic misuse.
The defining characteristic is not merely access, but computational synthesis. The AI agent transforms distributed internal data into actionable intelligence that the subject could not reasonably derive manually within policy boundaries. |
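One concrete hunt for the persistence pattern described in PR029 is a periodic audit that cross-references automation-account owners against the active employee roster, surfacing bots that survived their creator's offboarding. This is a minimal sketch with hypothetical inventories; in a real deployment the bot list would come from the collaboration platform's admin API and the roster from the HR or identity system:

```python
# Hypothetical inventories; all identifiers are illustrative.
BOTS = [
    {"bot_id": "bot-standup", "owner": "alice", "last_rotated": "2024-11-01"},
    {"bot_id": "bot-export", "owner": "mallory", "last_rotated": "2023-02-14"},
]
ACTIVE_EMPLOYEES = {"alice", "bob"}

def orphaned_bots(bots, active_employees):
    """Return bot IDs whose registered owner is no longer an active
    employee, i.e. candidates for credential revocation or review."""
    return [b["bot_id"] for b in bots if b["owner"] not in active_employees]

print(orphaned_bots(BOTS, ACTIVE_EMPLOYEES))  # ['bot-export']
```

Extending the same audit to stale credentials (e.g. tokens not rotated within policy) and to bots with no identifiable owner at all covers the common variants of this technique, since automation accounts rarely pass through standard offboarding.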