Cybersecurity, like many other fields, is one where it is easy to become overwhelmed by the technical aspects of the challenges involved and miss one of the most important determinants of outcomes: human behaviour. In the same way that you can provide an army with weapons, logistics and technology, if the individual soldiers in the field are not highly trained and motivated, battles will be short and unlikely to go in your favour. In fact, those two terms, 'highly trained and motivated', sit at the core of a vast and raging argument going on across the entire cybersecurity field at the moment. It is an argument with sides, factions, leaders, battles, betrayals and allegiances, and one that I will guide you through (and contribute a little to) during these articles. In the meantime, just remember the terms: training and motivation. There's a lot in there.

So during my time as a research engineer at Data61 of the CSIRO, my boss at the time, the incredible Fang Chen, asked me to take a meeting with the two leaders of the cybersecurity program there to see if there was anything for our team (Interactive Behaviour Analytics) in this particular space. It was an interesting meeting, not least because I knew absolutely nothing about cybersecurity at the time, but the two program leaders, Daniella Traino and Jodi Steel, spent two hours describing the space they worked in: the major challenges, the opportunities and the general terrain of the industry and research at the time. It was fascinating, and I thought I recognised a number of foundational human behaviour patterns amongst the challenges they outlined. I went away, did some reading, got across the 'usable security' paradigm and quickly came to the conclusion that this was, indeed, a target-rich environment for our group, with excellent opportunities for really good science. There were also opportunities to connect this work to valuable commercial outcomes for industry, as well as the chance to progress the conversation about the role of human beings in the security ecosystem more generally.

Because what I realised, and what was quickly confirmed by the current body of literature on the subject, was that the critically important question of 'why do users often fail to deploy even very basic security protocols in their work practices?' is not only a behavioural one but a cognitive one, and one that a well-established and mature field of psychology already has many of the answers to: health psychology.

Health psychology is largely concerned with the question: 'What are the major determinants of people engaging in self-protective behaviours?' Sound familiar? It should, because the challenge of convincing the population at large to wear sunscreen at the beach, to exercise more, or to always wear condoms is highly analogous to the problem of getting people to use strong, unique passwords for their logins, keep their browser plugins up to date and not share login credentials. Cybersecurity behaviour is in many ways perfectly analogous to health behaviour. In both cases:

  1. The desired, protective behaviours are onerous and unappealing.
  2. The consequences of poor behaviour typically lie sometime in the future.
  3. The consequences are often poorly understood and…
  4. There is the eternal human optimism of ‘it can’t happen to me’. 

So in short – people are, by and large, really bad at engaging in self-protective behaviours.

Health psychology research really hit its stride in the 1980s, when governments began to deploy large public advertising and information campaigns aimed at addressing common health problems in the population. In Australia we had mass media campaigns such as 'Life. Be in it.' starring the legendary 'Norm', and the enormously successful 'Slip, Slop, Slap' campaign.

The scale and ambition of these campaigns meant that significant resources were deployed to gather data, validate approaches and develop an empirical understanding of what worked and what didn't for the central challenge: mass behaviour change through messaging. The result was a large number of competing cognitive models for predicting behaviour, which were repeatedly challenged, tested and incrementally improved by countless researchers over the years, leaving us with a pretty solid understanding of the major dynamics and challenges of this particular problem-space.

And in recent years, the cybersecurity space has caught up and begun to deploy these same cognitive models, which not only predict behaviour but allow you to craft your interventions to best change it. We'll talk more about these models in another post soon.

So back to Data61. I was asked to set up a team and begin work with Daniella Traino, spruiking our research capacity to interested commercial entities in the hope of securing that holy grail within the CSIRO: a joint public/private research project that generated both good science for us and valuable commercial outcomes for our partners, in the form of actionable insights into their security challenges. Daniella and I took a host of meetings, developed our value proposition, listened to industry concerns, evaluated organisational maturity in the field, and eventually secured a multi-stage contract with a major commercial bank (which I can't name for obvious reasons, so let's just call them 'the bank').

So, armed with a host of existing research into health psychology, a handful of cutting-edge papers applying it to the cybersecurity domain, some really smart people around me, and great institutional support from Data61, our little team set out to explore a brave new world: the behavioural (and cognitive) aspects of cybersecurity.

More to follow.