We get asked this question a lot, usually by someone who already has a cybersecurity program but suspects they have a blind spot. They are usually right.
Insider risk is the potential for harm caused by people who have legitimate access to an organization’s systems, data, facilities, or people. Employees, contractors, partners, service providers. The older term, “insider threat,” frames this as catching bad actors. Insider risk is broader.
It includes the employee who clicks a phishing link, the contractor who misconfigures a cloud share, the departing engineer who copies a folder they probably should not have, and the malicious insider who deliberately steals data. Most programs focus almost entirely on the last category and miss the first three, which is where the majority of damage actually comes from.
The three categories
Insider risk generally breaks down into three types:

Malicious insiders act with intent. Data theft, fraud, sabotage, espionage. These are the cases that make headlines. They are also the minority. In our experience, and consistent with industry data, malicious insiders account for roughly a quarter of all insider incidents.
Negligent insiders are the largest category, responsible for over half of incidents. These are people who make mistakes, take shortcuts, or bypass controls because the secure path is harder than the insecure one. Shadow IT, misconfigured permissions, sensitive documents emailed to the wrong recipient. No malice, but real damage.
Compromised insiders are legitimate accounts hijacked by external actors through credential theft, social engineering, or supply chain compromise. The person did not cause the risk, but their access did. This category is growing fast. The 2025 Verizon DBIR reported third-party involvement in breaches doubling year-over-year.
Each category needs different prevention strategies, different detection logic, and different response protocols. A program designed only for one will miss the other two.
How insider risk develops: the Critical Pathway
People do not wake up one morning and decide to become insider threats. There is a process. The most widely referenced model for understanding this is the Critical Pathway to Insider Risk (CPIR), developed by Eric Shaw and Laura Sellers.
The CPIR describes a progression through several stages:
Personal predispositions are individual traits and history that create vulnerability. These might include patterns of rule-breaking, difficulty managing stress, poor social skills, or a history of conflict at work. Predispositions alone do not make someone an insider risk, but they lower the threshold for what happens next.
Stressors are the triggering events. A bad performance review, a passed-over promotion, financial pressure, personal crisis. Stressors interact with predispositions. The same event that one person absorbs without issue can push someone with existing vulnerabilities toward a concerning trajectory.

Concerning behaviors are the observable signals that someone is moving along the pathway. Increased policy violations, conflicts with colleagues, withdrawal, expressions of grievance, unusual working hours, access to systems outside their normal pattern. These behaviors rarely appear in isolation. They tend to escalate.
The crime script is the final stage: planning, preparation, and execution of the harmful act. Reconnaissance on security controls, testing access boundaries, staging data for exfiltration. By this point, intervention is urgent but still possible.
What makes the CPIR useful is not the individual stages (most are intuitive) but the recognition that insider risk is a process, not an event. That process happens over time and creates multiple opportunities for intervention before harm occurs.
Organizational conditions matter as much as individual ones. Shaw’s model also accounts for organizational predispositions: weak policy enforcement, cultures where people are reluctant to report concerns, and what some researchers call the “trust trap,” where excessive trust in employees leads to reduced oversight. These organizational factors do not cause insider risk directly, but they create the environment where it can develop unchecked.
What this means for managing insider risk
If insider risk is a process with multiple stages, then managing it is not about building a wall. It is about creating intervention points across the entire pathway.
Start with personnel security. Robust vetting and background checks proportionate to role sensitivity catch predispositions before they enter the organization. Not surveillance. Due diligence.
Stressors are harder. Manager training to recognize behavioral changes, employee assistance programs that people actually use, and JML (joiner-mover-leaver) processes that account for sensitive transitions like demotions, restructurings, and offboarding. Most organizations handle onboarding well and offboarding poorly. The space in between barely gets attention.
Concerning behaviors require a speak-up culture where colleagues can report concerns without fear of retaliation, behavioral analytics that detect pattern changes (not just policy violations), and multidisciplinary triage that brings together security, HR, legal, and management. Treating every signal as a pure security event is a mistake we see often.
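To make "pattern changes, not just policy violations" concrete, here is a minimal sketch of the idea behind per-user baselining: compare today's activity against the same user's own history rather than a fixed rule. The event type, window length, and 3-sigma threshold are illustrative assumptions, not recommendations.

```python
from statistics import mean, stdev

def flag_pattern_change(history, today, min_days=14, threshold=3.0):
    """Flag when today's activity deviates sharply from a user's own baseline.

    history: list of daily event counts for one user (e.g. files accessed per day).
    today:   today's count.
    Returns True when today's count exceeds the baseline mean by more than
    `threshold` standard deviations. The 14-day minimum and 3-sigma cutoff
    are illustrative defaults only.
    """
    if len(history) < min_days:
        return False  # not enough data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today > mu  # flat baseline: any increase is a change
    return (today - mu) / sigma > threshold

# A user who normally touches ~20 files a day suddenly accesses 400:
baseline = [18, 22, 19, 25, 21, 17, 23, 20, 19, 24, 22, 18, 21, 20]
print(flag_pattern_change(baseline, 400))  # unusually high for this user
print(flag_pattern_change(baseline, 23))   # within this user's normal range
```

The point of the sketch is the reference class: the same 400-file day that is alarming for this user might be routine for a backup administrator, which is why static thresholds generate noise and per-user baselines generate signal.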
Then there are the technical controls that limit what a single insider can do: least privilege, data loss prevention, privileged access management, detection rules tuned to insider scenarios specifically, and investigation capabilities that preserve evidence and legal defensibility.
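As one example of a detection rule tuned to an insider scenario rather than a generic policy violation, consider data staging: a burst of archive creation, USB writes, or uploads that individually look benign but together exceed what the user moves in a normal hour. The event schema, action names, and thresholds below are hypothetical, chosen only to illustrate the sliding-window logic.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical event shape: (timestamp, user, action, bytes).
# Action names and limits are assumptions for illustration, not a product schema.
STAGING_ACTIONS = {"archive_create", "usb_write", "cloud_upload"}

def detect_staging(events, window=timedelta(hours=1), byte_limit=500_000_000):
    """Return users whose staging-type activity exceeds byte_limit in any window."""
    per_user = defaultdict(list)
    for ts, user, action, size in events:
        if action in STAGING_ACTIONS:
            per_user[user].append((ts, size))
    flagged = set()
    for user, items in per_user.items():
        items.sort()
        total, start = 0, 0
        for ts, size in items:
            total += size
            # shrink the window from the left until it spans <= `window`
            while ts - items[start][0] > window:
                total -= items[start][1]
                start += 1
            if total > byte_limit:
                flagged.add(user)
    return flagged

t0 = datetime(2025, 6, 1, 9, 0)
events = [
    (t0, "alice", "usb_write", 300_000_000),
    (t0 + timedelta(minutes=20), "alice", "archive_create", 300_000_000),
    (t0, "bob", "file_read", 900_000_000),  # not a staging action
]
print(detect_staging(events))  # alice crosses 500 MB of staging within an hour
```

A rule like this is only the starting point of triage, not a verdict: the output feeds the multidisciplinary review described above, where legitimate explanations (a migration project, a sanctioned backup) are ruled in or out before anyone treats the signal as an incident.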
The key insight is that no single control covers the full pathway. Technical monitoring will not catch personal predispositions. HR programs will not detect data staging. Legal frameworks will not prevent phishing clicks. Insider risk management requires a cross-functional approach that connects governance, human factors, technical controls, legal constraints, and operational readiness.
The European dimension

For organizations operating in the EU, there is an additional layer. NIS2 now mandates insider risk measures for covered entities. DORA extends similar requirements across financial services. But GDPR constrains how monitoring can be implemented, the EU AI Act classifies behavioral analytics in employment as high-risk, and Belgium’s Private Investigations Act requires government licensing for certain internal investigation activities.
These regulations pull in different directions. Building a program that satisfies all of them simultaneously requires thinking about legal defensibility from the start, not as an afterthought.
Where to start
The question we ask every client first is not "What tools do you have?" It is "Who owns insider risk in your organization?" If the answer takes more than five seconds, that is the gap.
Want to discuss how this applies to your organization?
