Dive into practical advice, research findings, and expert perspectives on building security-aware cultures. Explore evidence-based strategies to strengthen your organization's human security posture.
Despite years of simulations and mandatory e-learning, phishing continues to succeed. Why? Because too many organisations treat phishing simulations as a one-off training exercise rather than a behavioural challenge. Clicking “next” on an annual training module doesn’t rewire the habits and decision-making shortcuts that attackers exploit every day.
Beneath the firewalls and encryption layers lies a far older human force: our need to belong. This drive for group identity, which has shaped societies for millennia, now shapes how we behave online. This is where cybersecurity meets anthropology, a lens that helps us understand why people in digital spaces form “cyber tribes” and how these tribal affiliations influence behaviours, risk perception, and even compliance with security practices.
In the first three blogs of this series, we looked at the foundations of choice architecture, the power of secure defaults, and how UX nudges can guide people toward safer decisions. But here’s the challenge: unless these principles are baked into the way we build technology, they risk becoming afterthoughts, nice-to-have features that get dropped when deadlines bite. That’s why the next step is embedding choice architecture into the Software Development Lifecycle (SDLC) itself.
Traditional threat models focus heavily on technical vectors: malware payloads, privilege escalation, misconfigurations, and lateral movement. These are critical, but they paint only half the picture. The majority of breaches today begin with a human: a click, a disclosure, a misjudgement, or an omission. If we treat people as static, rational elements in the system, our threat models remain incomplete. It’s time to bring behavioural modelling into the heart of threat assessment.
This article explores how Gen AI can support HCRM, with a focus on intervention design, and provides 10 validated prompts that practitioners can adapt for their organisation’s specific context.
Human Cyber Risk Management (HCRM) is a discipline that draws on behavioural science to understand why people click, share, trust, or ignore warnings, and how we can shape cultures of secure behaviour. Today, we stand on the edge of something big. Artificial Intelligence is not just another tool in the security stack; it is reshaping the very fabric of how people work, learn, and interact. And with it, the way we must think about human cyber risk.
In the first two blogs of this series, we explored how choice architecture shapes behaviour and why secure defaults are one of the most powerful tools in security. Now it’s time to move into the world of user experience (UX) and interface design, where the smallest details can have the biggest impact on whether people behave securely… or take risky shortcuts.
Part two of a seven-part series unpacking how the behavioural science concept of choice architecture can be woven into IT architecture, UX/UI, and development lifecycles to nudge, guide, and default users toward secure behaviours – without relying solely on training or policy. Each article will blend behavioural science, secure-by-design principles, and practical application in the technology lifecycle.
The first of a seven-part series that will unpack how the behavioural science concept of choice architecture can be woven into IT architecture, UX/UI, and development lifecycles to nudge, guide, and default users toward secure behaviours – without relying solely on training or policy. Each article will blend behavioural science, secure-by-design principles, and practical application in the technology lifecycle.
From automating processes to generating insights, AI offers unprecedented opportunities. But alongside this opportunity comes a quieter, less technical challenge: AI misuse by humans inside organisations. When we talk about AI risk, the conversation often fixates on model bias, adversarial attacks, or regulatory compliance. Yet many of the most immediate risks don’t come from the technology itself – they come from the way people choose to use it.
Your face. Your voice. Your words – used against you. In the age of AI, deception just became terrifyingly personal.
As cyber threats become more sophisticated, organisations are coming under increasing pressure to monitor employee activity more closely. From detecting insider threats to preventing data leaks, behaviour monitoring has become a standard part of security policy within many organisations.