A CISO’s guide to: Evidencing behavioural change
03 May 2018
“Measures maketh the man” – something, surely, only the board could cobble together.
It is enough to make the CISO tasked with evidencing change cry in her beer.
But if you don’t have a number, what do you report to the board?
Giving them evidence of real returns on your information security spending is vital if you are ever going to convince them that your team is more than just a cost sink.
When dealing with as tricky a quality as behavioural change, it is small wonder the time-pressed CISO reaches for the easy-to-pick metrics: training completions, intranet page hits or phishing simulation click-throughs.
Pick up a security awareness report and the chances are it will be peppered with such numbers. All useful in their own way, but far too often they miss the mark. After all, the goal of your security awareness campaign is to change behaviour. So any metric worth reporting must pass two simple tests: does it measure actual behaviour, and does it show that behaviour changing?
So how do the usual suspects stack up against these two simple criteria? And what are the alternatives? Here’s a quick guide to what is true evidence of behavioural change. And what’s simply fodder for the boardroom shredder.
Training completions
The logic behind increasing your training uptake is persuasive. If users have taken a course and passed an assessment, they have undoubtedly gained a measure of security knowledge. The problem is proving that knowledge gained becomes behaviour changed. A poorly designed CBT – and an overly target-driven culture – can encourage people to game the system. The result? High pass rates without any genuine change in behaviour.
Intranet page hits
Training may expose people to a brief, intense burst of information, but even better would be a rich repository of engaging security content, easily accessible to everyone in your organisation. Which is what your intranet should be. But how often do eyes light up when talking about the company intranet pages? Tie yourself to such absolute metrics as page hits and you’ll be hostage to the vagaries of intranet credibility.
Phishing simulation click-throughs
This has to be a win-win. You’re testing actual behaviour. You’re getting a plethora of actionable data. And you are gifting those who fall foul and click through with a genuine ‘teachable moment’. The problem is repeatability. If you run the same phishing test again and again, your audience will become all too familiar with it. Change the test, and any change in the click-through rate could simply be down to the change in framing, rather than improved behaviour.
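To make the repeatability caveat concrete, here is a minimal sketch of tracking click-through rates across simulation runs. The campaign names and figures are entirely illustrative, and any real report should flag where the lure itself changed between runs, since a shift in framing can move the rate on its own:

```python
# Illustrative phishing-simulation results (hypothetical data, not real).
# "sent" is the number of simulated phishes delivered; "clicked" is the
# number of recipients who clicked through.
campaigns = [
    {"name": "Q1 invoice lure", "sent": 500, "clicked": 90},
    {"name": "Q2 invoice lure", "sent": 500, "clicked": 60},   # same framing as Q1
    {"name": "Q3 parcel lure",  "sent": 500, "clicked": 85},   # framing changed here
]

def click_rate(campaign):
    """Fraction of recipients who clicked the simulated phish."""
    return campaign["clicked"] / campaign["sent"]

for c in campaigns:
    print(f'{c["name"]}: {click_rate(c):.0%}')
```

The Q1-to-Q2 drop is comparable like-for-like; the Q3 figure is not, because the lure changed, which is exactly the trap described above.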
Incident response rates
Your incident report metrics flow directly from actual user behaviour, which is great. But the killer metric to fashion from such raw incident numbers is the trend towards higher-quality reports. Improved quality means your response team can be more efficient in their actions. And you don’t have to win the argument on exactly why more incidents are a good thing.
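One way to fashion that quality trend, sketched below with hypothetical data: classify each report as high quality or not (say, containing enough actionable detail for the response team), then track the high-quality share per month rather than the raw count:

```python
from collections import defaultdict

# Hypothetical incident reports: (month, has_actionable_detail).
# In practice the quality flag would come from your triage workflow.
reports = [
    ("2018-01", False), ("2018-01", True),  ("2018-01", False),
    ("2018-02", True),  ("2018-02", True),  ("2018-02", False),
    ("2018-03", True),  ("2018-03", True),  ("2018-03", True),
]

def quality_trend(reports):
    """Return the share of high-quality reports per month, in month order."""
    counts = defaultdict(lambda: [0, 0])  # month -> [high_quality, total]
    for month, good in reports:
        counts[month][0] += int(good)
        counts[month][1] += 1
    return {m: hq / total for m, (hq, total) in sorted(counts.items())}

print(quality_trend(reports))
```

A rising share tells the board the audience is learning what a useful report looks like, even when the raw incident count also rises.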
Behavioural change surveys
This is an approach to metrics that does what it says on the tin. Pick a behavioural model, design a survey, find a space in the busy communications timeline and watch the metrics flood in. Lots of rich data can be obtained this way, but there are things to watch out for.
Ideally, you’ll want to baseline behaviour and attitudes before any change campaign gets going. Then, every survey re-run will gain you real evidence to support that campaign. You’ll also need to find ways to engage with the audience to avoid survey fatigue. And don’t forget to supplement the survey results with qualitative insight from focus groups. That way you’ll spot potential biases in the self-reported behaviours and attitudes of the survey.
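The baseline-then-re-run idea can be sketched in a few lines. The question names and Likert scores (1–5) below are invented for illustration; the point is simply that the reportable metric is the shift against the baseline, not the follow-up score in isolation:

```python
from statistics import mean

# Hypothetical self-reported scores (1 = never, 5 = always) for two
# behaviours, before and after the awareness campaign.
baseline = {
    "reports_suspicious_email": [2, 3, 2, 3],
    "locks_screen":             [3, 3, 4, 2],
}
followup = {
    "reports_suspicious_email": [4, 3, 4, 5],
    "locks_screen":             [4, 3, 4, 4],
}

def shift(before, after):
    """Mean score change per question between the two survey runs."""
    return {q: round(mean(after[q]) - mean(before[q]), 2) for q in before}

print(shift(baseline, followup))
```

Remember the caveat above: these are self-reported behaviours, so the shifts need the qualitative focus-group check before they go anywhere near the board pack.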
The above metrics only scratch the surface of the evidential base for change. And of course, no single metrics approach can be a one-size-fits-all solution. Mix and match to suit your audience and the narrative.
And don’t forget the power of the measurement itself to effect change. Manage that measurement process intelligently and you can squeeze out an extra ounce or two of changed behaviour. Don’t believe me? Just ask.