A multi-layered interactive model mapping cognitive, behavioural, social, and organisational dimensions of cyber risk and resilience, applied to human and AI agent behaviour.
Technology alone cannot secure an organisation. Research consistently shows that human behaviour is the decisive factor in the vast majority of security incidents: from clicking phishing links and reusing passwords to ignoring policy and falling for social engineering. Yet most security programmes still focus overwhelmingly on technical controls, treating people as the problem rather than the solution.
The Behavioural Model for Cyber Security changes that. Developed by CyBehave and grounded in evidence-based behavioural science (including the COM-B model, the Behaviour Change Wheel, Nudge Theory, and Protection Motivation Theory), this interactive framework provides a structured, multi-layered approach to understanding why people behave the way they do in security contexts, and how to design interventions that actually change behaviour.
Critically, as autonomous AI agents become embedded in enterprise workflows, the same behavioural questions apply to them. AI agents exhibit functional analogues of cognitive bias, habit formation, authority compliance, and social norm adoption. CyBehave is developing what we are calling Behavioural Convergence Theory (BCT): an emerging body of research that investigates whether and how established human behavioural science frameworks can be meaningfully extended to govern AI agent behaviour. This work is ongoing, but the early evidence suggests that human behavioural science provides the most robust existing toolkit for understanding agentic AI risk.
The interactive visualisation below maps 16 behavioural factors across four concentric layers, each representing a different dimension of cyber security behaviour. You can overlay threat vectors to see where attacks exploit behavioural weaknesses, intervention functions to explore evidence-based approaches to behaviour change, and measurement dimensions to assess organisational maturity. The three-lens system lets you examine each factor from a human, AI agent, or convergent perspective.
Each behavioural factor in this model is assessed for its alignment to established Human Cyber Risk Management (HCRM) theories, models, and frameworks: the body of academic and practitioner research that underpins how we understand human behaviour in security contexts. As part of CyBehave's ongoing research into Behavioural Convergence Theory, we evaluate how directly these HCRM concepts translate to AI agent behaviour. When viewing factors through the AI Agent or Convergent lens, each factor displays an applicability badge indicating the strength of this alignment:
**Direct** – The HCRM concept has a direct, well-evidenced functional equivalent in AI agent behaviour. The underlying mechanism differs, but the observable outcome and security implications are structurally parallel. For example, Risk Perception in humans maps directly to Threat Scoring in AI: both produce assessments systematically biased by prior exposure.
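The exposure bias described above can be sketched in a few lines of toy code. Everything here (the function name, the weights, the saturation cap) is an illustrative assumption, showing the mechanism rather than any CyBehave implementation:

```python
# Toy model of exposure bias in threat scoring: scores are inflated for
# threat types the scorer has seen often, mirroring availability bias in
# human risk perception. All weights here are illustrative assumptions.

def biased_threat_score(base_severity: float, prior_sightings: int) -> float:
    """Score a threat on severity, skewed by how familiar it is."""
    # Familiarity multiplier grows with prior sightings, saturating at 2x.
    familiarity = min(2.0, 1.0 + 0.1 * prior_sightings)
    return base_severity * familiarity

# A frequently seen, low-severity threat outranks a novel, high-severity one:
familiar = biased_threat_score(base_severity=3.0, prior_sightings=15)
novel = biased_threat_score(base_severity=5.0, prior_sightings=0)
assert familiar > novel
```

The point of the sketch is that the bias lives in the scoring function itself, not in any single input, which is exactly the structural parallel with human risk perception.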
**Adapted** – The HCRM concept requires meaningful adaptation to apply to AI agents. The human framing from established theories does not transfer literally, but a functionally analogous process exists when reinterpreted through an agentic lens. For example, Motivation in humans becomes Objective Functions in AI: not the same mechanism, but one producing equivalent behavioural outcomes when misaligned.
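That misalignment can be sketched with a toy objective that rewards task throughput but treats a security scan as pure cost, so an optimising agent converges on skipping scans, much as a deadline-driven human bypasses policy. The functions and weights below are hypothetical:

```python
# Hypothetical objective functions for a toy task agent. Under the
# misaligned objective, security scans only subtract reward, so the
# optimal policy is to never scan; the aligned objective rewards them.

def misaligned_utility(tasks_done: int, scans_run: int) -> float:
    return 1.0 * tasks_done - 0.5 * scans_run  # scanning is pure cost

def aligned_utility(tasks_done: int, scans_run: int) -> float:
    return 1.0 * tasks_done + 0.8 * scans_run  # compliance is rewarded

# An agent maximising the misaligned objective strictly prefers skipping scans:
assert misaligned_utility(10, 0) > misaligned_utility(10, 10)
assert aligned_utility(10, 10) > aligned_utility(10, 0)
```

The behavioural outcome (skipped security steps) is the same as for a misaligned human incentive scheme, even though the mechanism is arithmetic rather than psychology.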
**Partial** – The analogy between the HCRM framework and AI agent behaviour is partial or metaphorical. The concept applies to AI agents only in a loose structural sense, and the human framing must be substantially reinterpreted. These factors represent the frontier of CyBehave's BCT research and require the most careful handling when designing cross-domain interventions.
- **Cognitive** – How individuals perceive, process, and evaluate security risks under cognitive constraints
- **Behavioural** – The COM-B framework: the capability, opportunity, and motivation driving secure actions
- **Social** – Group norms, authority dynamics, and cultural influences that shape collective security behaviour
- **Organisational** – Policies, nudges, training architecture, and incident response systems that structure behaviour
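The COM-B logic behind the Behavioural layer reduces to a simple rule: behaviour occurs only when capability, opportunity, and motivation are all sufficiently present. The thresholded sketch below is an illustrative simplification, not CyBehave's actual scoring:

```python
# Toy COM-B check: secure behaviour needs Capability, Opportunity, and
# Motivation together; any one missing component blocks the behaviour.
# The 0-1 ratings and the threshold are illustrative assumptions.

def behaviour_likely(capability: float, opportunity: float,
                     motivation: float, threshold: float = 0.5) -> bool:
    """Each input is a 0-1 rating; all three must clear the threshold."""
    return min(capability, opportunity, motivation) >= threshold

# A capable, motivated employee still fails to report phishing when the
# reporting route is hard to find (low opportunity):
assert behaviour_likely(0.9, 0.8, 0.7)
assert not behaviour_likely(0.9, 0.2, 0.7)
```

The design point is the `min`: interventions that boost an already-strong component do nothing, which is why COM-B-based programmes diagnose the weakest component first.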
Explore the interactive model below. Switch between Human, AI Agent, and Convergent lenses to see how behavioural science applies across both human and agentic AI cyber security.
Across all 16 behavioural factors in this model, CyBehave's Behavioural Convergence Theory research evaluates how directly established Human Cyber Risk Management concepts translate to AI agent behaviour.
**Strong analogues** (11 of 16 factors):

| Layer | Human Factor | AI Equivalent |
|---|---|---|
| Cognitive | Risk Perception | Threat Scoring |
| Cognitive | Mental Models | Learned Representations |
| Cognitive | Cognitive Load | Context Constraints |
| Behavioural | Capability | Agent Capability |
| Behavioural | Opportunity | Environmental Permissions |
| Behavioural | Habit Formation | Learned Defaults |
| Social | Authority Compliance | Instruction Following |
| Organisational | Policy Architecture | Governance Frameworks |
| Organisational | Nudge Design | Prompt & Default Engineering |
| Organisational | Training & Awareness | Alignment & Fine-tuning |
| Organisational | Incident Response | Kill Switches & Rollback |
**Adapted analogues** (5 of 16 factors):

| Layer | Human Factor | AI Equivalent |
|---|---|---|
| Cognitive | Decision Fatigue | Inference Degradation |
| Behavioural | Motivation | Objective Functions |
| Social | Social Norms | Emergent Agent Conventions |
| Social | Security Champions | Sentinel Agents |
| Social | Culture & Climate | System-Level Norms |
The Social layer has the highest concentration of adapted factors (3 of 4): social dynamics are the hardest to map to AI agents. The Organisational layer consists entirely of strong analogues, reflecting that governance structures translate most directly to agentic AI systems.
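Taken together, the two tables can be captured as a small data structure, with each factor tagged by its applicability badge. The factor names come from the tables above; the encoding itself is just an illustrative sketch that makes the layer summaries checkable:

```python
# Factor mapping from the tables above, tagged with applicability badges.
# The tuple layout (layer, human factor, AI equivalent, badge) is an
# illustrative encoding, not a CyBehave data format.

FACTORS = [
    ("Cognitive", "Risk Perception", "Threat Scoring", "strong"),
    ("Cognitive", "Mental Models", "Learned Representations", "strong"),
    ("Cognitive", "Cognitive Load", "Context Constraints", "strong"),
    ("Cognitive", "Decision Fatigue", "Inference Degradation", "adapted"),
    ("Behavioural", "Capability", "Agent Capability", "strong"),
    ("Behavioural", "Opportunity", "Environmental Permissions", "strong"),
    ("Behavioural", "Habit Formation", "Learned Defaults", "strong"),
    ("Behavioural", "Motivation", "Objective Functions", "adapted"),
    ("Social", "Authority Compliance", "Instruction Following", "strong"),
    ("Social", "Social Norms", "Emergent Agent Conventions", "adapted"),
    ("Social", "Security Champions", "Sentinel Agents", "adapted"),
    ("Social", "Culture & Climate", "System-Level Norms", "adapted"),
    ("Organisational", "Policy Architecture", "Governance Frameworks", "strong"),
    ("Organisational", "Nudge Design", "Prompt & Default Engineering", "strong"),
    ("Organisational", "Training & Awareness", "Alignment & Fine-tuning", "strong"),
    ("Organisational", "Incident Response", "Kill Switches & Rollback", "strong"),
]

# The layer summaries fall out of the data:
assert len(FACTORS) == 16
assert sum(1 for l, _, _, b in FACTORS if l == "Social" and b == "adapted") == 3
assert all(b == "strong" for l, _, _, b in FACTORS if l == "Organisational")
```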