Insider threats are security incidents caused by people who already have legitimate access. Most are negligent rather than malicious: a finance employee falls for BEC, a developer pushes credentials to a public repo, a departing salesperson copies the customer list. Truly malicious insiders are rarer but more damaging when they happen. Modern insider risk programmes balance detection with trust, recognising that surveillance theatre destroys culture without preventing the incidents that matter.
What it is
An insider threat is a security incident in which the actor has, or recently had, legitimate access to the organisation's systems or data. Insider threats are commonly divided into three categories:
Malicious insiders. Employees, contractors, or partners who deliberately misuse their access. Motivations include financial gain, revenge against the organisation, ideological conviction, or recruitment by an external party (criminal, competitor, or nation-state).
Negligent insiders. People who cause incidents through carelessness, poor security hygiene, or falling for social engineering. They had no malicious intent. Their behaviour created the opening anyway.
Compromised insiders. Legitimate users whose accounts have been taken over by an external attacker. From a detection perspective, the activity looks like an insider threat because the credentials and sessions are real. The actor is not.
The boundaries blur in practice. A negligent insider whose credentials end up in an attacker's hands becomes a compromised insider, and an attacker operating through a compromised account can produce activity indistinguishable from a malicious insider's.
The Ponemon Institute's annual cost of insider threats report consistently attributes roughly 60 percent of incidents to negligent insiders and around 25 percent to malicious insiders, with compromised insiders making up the remainder. Malicious incidents cost more per occurrence, but negligent incidents happen far more often and produce most of the cumulative damage.
Why it matters
Insider threats matter for reasons that perimeter-focused security misses:
The attacker is already inside. Network segmentation, perimeter firewalls, and external attack surface management do not stop someone with a legitimate VPN account and a corporate laptop. The defender's tools have to recognise that the bad actor looks like a normal user.
The signal-to-noise ratio is brutal. Every employee accesses systems, downloads files, and emails colleagues every day. Malicious activity hides inside that legitimate activity. Detection has to find anomalies in enormous volumes of normal behaviour.
The blast radius depends on the role. A junior employee in marketing has limited access. A database administrator has the keys to the data. A cloud admin can spin up infrastructure that bills the company millions if abused. Risk concentrates in roles, not in individuals.
Third-party insiders are often the gap. Contractors, consultants, MSPs, and offshore support staff frequently have similar access to employees with weaker oversight. Several major breaches have come through this channel.
The recent record makes the case in detail:
- Coinbase (May 2025). Overseas customer support contractors were bribed or socially engineered into providing customer data, including names, addresses, government IDs, and account details. The attackers used the data to phish high-net-worth account holders. Coinbase set aside $180 million to $400 million for remediation.
- Tesla (2018). An employee who had been passed over for promotion modified internal manufacturing software, exfiltrated gigabytes of confidential data, and shared it with third parties. The case became one of the most-cited examples of malicious insider activity in tech.
- Cash App (2021). A former employee accessed customer data after departure (the access had not been revoked) and exfiltrated information on more than eight million customers. Block disclosed the breach in April 2022.
- Twitter (2022). A former Twitter employee was convicted of using his access to spy on dissidents on behalf of the Saudi government. The case raised the spectre of state-recruited insiders at major platforms.
- Anthem (2017). A subcontractor exfiltrated 18,000 member records over two months. The data went to a personal email account.
These cases share a pattern. The insider had legitimate access for a legitimate reason. The misuse was difficult to detect in real time. The damage was done before the organisation noticed.
How attackers exploit it
The mechanics differ depending on the insider type.
For malicious insiders, common scenarios include:
- A departing employee copies customer lists, source code, or strategic documents in the weeks before departure. The intent might be a head start at a competitor, a business they plan to start, or revenge.
- A finance or HR employee abuses access to extract data for personal gain, identity theft, or sale.
- An IT or security staff member uses privileged access to cover up activity, modify logs, or grant themselves access to sensitive systems for sale or use later.
- An employee recruited by an external party (criminal organisation, competitor, nation-state) exfiltrates data or sabotages systems on their behalf. Recruitment increasingly happens through LinkedIn or Telegram outreach with offers of significant payment.
For negligent insiders, the scenarios are familiar:
- A developer pushes credentials, API keys, or secrets to a public GitHub repository.
- A finance employee falls for a BEC pretext and wires money to an attacker.
- A laptop is left unlocked, lost, or stolen with sensitive data on it.
- An employee uses an unsanctioned cloud service ("shadow IT") to share files with a partner, creating an exposure the security team has no visibility into.
- A privileged user falls for phishing, approves or hands over an MFA code, and gives the attacker the keys to the kingdom.
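Of the scenarios above, the pushed-credentials case is the cheapest to catch mechanically, before the push rather than after. A minimal pre-commit secret scan might look like the sketch below. The patterns are illustrative only; real scanners such as gitleaks or trufflehog ship hundreds of rules plus entropy checks:

```python
import re
import sys

# Illustrative patterns only -- not an exhaustive rule set.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Generic API key assignment": re.compile(
        r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def scan_text(text: str) -> list[tuple[str, int]]:
    """Return (rule name, line number) for every suspected secret."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((name, lineno))
    return hits

if __name__ == "__main__":
    # Run against staged files in a pre-commit hook, e.g. via sys.argv.
    for path in sys.argv[1:]:
        with open(path, encoding="utf-8", errors="ignore") as f:
            for name, lineno in scan_text(f.read()):
                print(f"{path}:{lineno}: possible {name}")
```

Wired into a pre-commit hook or CI step, a scan like this turns a negligent-insider incident into a blocked commit.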
For compromised insiders, the activity often looks like one of the malicious patterns above, but with an external attacker driving it. Stealer log credentials, AitM phishing, and MFA bypass attacks all produce compromised insiders.
How to detect it
Insider threat detection blends technical signals with behavioural and contextual ones.
Technical signals worth monitoring:
- Anomalous data access patterns. A user suddenly downloading large volumes of data they have not historically touched. A new pattern of access to systems outside the user's normal scope.
- Unusual data movement. Email attachments to personal accounts, uploads to consumer cloud storage, USB writes when policy normally forbids them, large transfers to unfamiliar destinations.
- Off-hours access. Logins at 3 AM by an employee who normally works 9-to-5. Sustained activity during vacation periods.
- Access changes near departure. A departing employee who suddenly accesses systems they have not touched in years. Access reviews triggered by departure announcements catch some of this.
- Shared credentials and account abuse. Multiple geographic locations on the same account. Service account credentials used interactively. Password manager exports right before departure.
- Code repository activity. Pushes to personal repos of code or data that should stay internal. Cloning of repos beyond the user's normal projects.
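Several of these signals reduce to simple rules over authentication and access logs. As one sketch, an off-hours check, assuming log records carrying a user and timestamp, and a per-user working window that a real system would learn from history rather than hard-code:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LoginEvent:
    user: str
    timestamp: datetime

# Hypothetical per-user working windows (start hour, end hour), local time.
# Hard-coded here for illustration; learned from history in practice.
WORK_HOURS = {"alice": (9, 17)}
DEFAULT_WINDOW = (9, 17)

def off_hours_logins(events: list[LoginEvent],
                     buffer_hours: int = 2) -> list[LoginEvent]:
    """Flag logins outside the user's usual window plus a grace buffer."""
    flagged = []
    for ev in events:
        start, end = WORK_HOURS.get(ev.user, DEFAULT_WINDOW)
        hour = ev.timestamp.hour
        if hour < start - buffer_hours or hour >= end + buffer_hours:
            flagged.append(ev)
    return flagged
```

The buffer matters: without it, a rule like this flags every early start and late finish, and the alert queue drowns.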
Behavioural and process signals worth incorporating:
- HR signals. A pending performance management process. A failed promotion. A resignation notice. A grievance escalation. These are sensitive and have to be handled carefully, but they correlate with elevated insider risk.
- Manager observations. Behavioural changes, expressions of grievance, conversations about competitors. The line between healthy management awareness and surveillance theatre is real and worth respecting.
- Access review findings. Routine recertification surfaces accounts that have access they should not, including dormant accounts and over-privileged active ones.
Data Loss Prevention (DLP) tools play a role but have well-known limits. They generate enormous volumes of low-fidelity alerts, miss anything that flows through encrypted channels they cannot inspect, and fail entirely against users who simply photograph their screen. Treating DLP as the primary insider control sets a low ceiling on detection quality.
User and Entity Behaviour Analytics (UEBA) products attempt to baseline normal behaviour and flag deviations. They work better than DLP for some scenarios but require months of tuning and produce many false positives early on.
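The baselining idea behind UEBA can be sketched in a few lines: model a user's own historical daily download volume and flag days that sit far outside it. Real products baseline many dimensions and compare against peer groups; this toy version shows only the core statistic, with thresholds chosen arbitrarily for illustration:

```python
import statistics

def is_anomalous(history_mb: list[float], today_mb: float,
                 sigmas: float = 3.0, min_history: int = 14,
                 floor_mb: float = 5.0) -> bool:
    """Flag today's download volume when it sits far above the user's
    own historical baseline (a toy one-dimensional UEBA check)."""
    if len(history_mb) < min_history:
        return False  # not enough history to baseline yet
    mean = statistics.mean(history_mb)
    stdev = statistics.pstdev(history_mb)
    # The floor keeps a near-zero stdev from flagging trivial deltas.
    return today_mb > mean + max(sigmas * stdev, floor_mb)
```

The `min_history` guard is why these products need months of tuning: until a stable baseline exists, every deviation is either flagged (noise) or suppressed (blindness).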
How to remediate
When an insider incident is confirmed:
- Preserve evidence. Do not tip off the actor by changing access patterns abruptly. Coordinate with HR, legal, and security to preserve logs, devices, and any artifacts that may be needed in a later investigation or prosecution.
- Contain access. Disable accounts, revoke sessions, retrieve devices. Where the case may go to law enforcement, follow the established forensic protocol.
- Identify the scope of access. What systems did the actor reach? What data did they touch? What did they exfiltrate, and where did it go?
- Notify legal counsel and HR early. Insider cases involve employment law, criminal law, and regulatory reporting obligations that vary by jurisdiction.
- Engage law enforcement if appropriate. Theft of trade secrets, financial fraud, and unauthorised access to computer systems are criminal offences in most jurisdictions. Recovery prospects are often better with law enforcement involvement.
- Notify regulators and affected parties as required. GDPR, CCPA, and sector-specific rules impose disclosure obligations. The clock typically starts on discovery.
- Conduct a root-cause review. What controls failed? Where was the gap? Update access policies, monitoring rules, and processes based on the lesson.
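The scoping step is largely log work. A sketch, assuming access logs are available as (user, system, timestamp) records; real investigations pull from many log sources with messier schemas:

```python
from collections import defaultdict
from datetime import datetime

def scope_of_access(logs: list[tuple[str, str, datetime]],
                    actor: str,
                    window_start: datetime) -> dict[str, int]:
    """Summarise which systems the actor touched, and how often,
    from the start of the suspected activity window onward."""
    counts: dict[str, int] = defaultdict(int)
    for user, system, ts in logs:
        if user == actor and ts >= window_start:
            counts[system] += 1
    return dict(counts)
```

The output is a starting inventory for the "what did they touch" question, not an answer to "what did they exfiltrate", which needs data-movement evidence on top.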
Best practices
- Implement least privilege rigorously. Most insider damage is amplified by access the user did not actually need. Periodic access recertification, role-based access control, and just-in-time elevation reduce the blast radius.
- Pay particular attention to departing employees. The two-week notice period is when the largest fraction of malicious data theft happens. Heightened monitoring (with disclosure, not in secret), revocation processes that complete on the last day, and documented offboarding catch most of it.
- Apply the same scrutiny to contractors and third parties. A contractor account with administrative access is the same risk as an employee account with administrative access, often with weaker oversight. Several major incidents have come through this gap.
- Build access reviews into the workflow. Quarterly recertification by managers, with default revocation when not confirmed, catches accumulated entitlements that nobody bothers to remove otherwise.
- Separate duties for high-impact actions. Production deployments, large financial transactions, mass data exports, and similar should require two-person approval. A single insider cannot then act alone.
- Make reporting psychologically safe. Many malicious insider cases were preceded by warning signs that colleagues observed but did not report, often out of loyalty or uncertainty. A reporting channel that is easy to use, confidential, and not punitive shifts the dynamic.
- Limit the use of shared accounts. Shared admin accounts make insider attribution impossible. Each privileged action should map to a specific human.
- Monitor for stolen data on the outside. Even with strong internal controls, exfiltration sometimes happens. Watching dark web markets, paste sites, and code repositories for your data turns up incidents that internal controls missed.
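Some of these practices are mechanical enough to encode directly. The access-review item, for instance, comes down to a default-revoke rule: any entitlement not explicitly confirmed during the cycle is removed. A minimal sketch, with data shapes that are illustrative rather than drawn from any particular IAM product:

```python
def recertify(entitlements: list[tuple[str, str]],
              confirmations: set[tuple[str, str]]) -> tuple[list, list]:
    """Recertification with default revocation: keep only the
    (user, resource) pairs a manager explicitly confirmed this
    cycle; everything else goes on the revocation list."""
    kept, revoked = [], []
    for pair in entitlements:
        (kept if pair in confirmations else revoked).append(pair)
    return kept, revoked
```

The default direction is the point: when silence means revocation rather than renewal, accumulated entitlements decay instead of compounding.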
On surveillance and trust
The strongest temptation in insider risk programmes is to monitor everyone all the time, more aggressively the higher the perceived risk. This produces several predictable problems.
First, the false positive rate is enormous. Most anomalies have benign explanations.
Second, employees who feel surveilled disengage and trust the organisation less, which is itself associated with elevated insider risk.
Third, monitoring without legal and HR alignment creates evidence that is hard to use in any subsequent action.
Fourth, the most damaging insiders often know enough to evade the monitoring.
The programmes that work focus on high-risk roles, high-risk transitions, and high-impact actions, with disclosure that monitoring exists. They invest in role-based access controls, separation of duties, and access reviews that reduce damage even when detection fails. And they rely on culture, fair management, and easy reporting as much as on tools.
Insider threat is a people problem. The technology helps. It does not replace the rest.
ScruteX monitors dark web markets and code repositories for stolen data and credentials, surfacing insider exfiltration before regulatory disclosure deadlines hit.
Further reading
Privileged Access Management (PAM)
Why privileged accounts sit at the centre of every ransomware kill chain, what PAM platforms actually do, and the gaps PAM does not close.
IAM Basics
The fundamental concepts of Identity and Access Management, the difference between authentication and authorisation, and the common weaknesses that turn IAM into the most-targeted layer in modern attacks.