In the first 4 parts of "Fixing Data Breaches", I highlighted education, data ownership and minimisation, the ease of disclosure and bug bounties as ways of addressing the problem. It was inevitable that we'd eventually end up talking about penalties, though, because the fact remains that although all the aforementioned recommendations make perfect sense, we're still faced with data breaches day in and day out from companies that just aren't getting the message.
This part of the series is also the hardest to implement. It requires regulatory changes, can be highly subjective and poses all sorts of cross-border challenges. But it's important, so let me do my best to articulate it.
Are Organisations Actually Paying Attention?
Here's what really strikes me about the current situation, and the first time I remember making this observation was after the VTech data breach. This was an incident where 4.8 million customers who'd bought kids' tablets, registered on VTech's services and then entered their children's personal data (name, gender and date of birth) had that data taken by someone who found a simple vulnerability in VTech's systems. That was in November 2015, a mere 3 months after the Ashley Madison data breach. To this day, Ashley Madison remains one of the most serious, damaging and extensively reported security incidents we've ever seen, which caused me to ask this question:
Are companies not watching the news? Is the CEO not saying "hey, I hear hacking is a thing, how well prepared are we?" How is it that people aren't being more proactive? We're bombarded with hacking stories all over the news every week; nobody can claim ignorance.
And that's what has stuck with me ever since that time, and I've asked the question over and over again as more big names have tumbled. I don't need to list them here because you know them - you know that this is a serious problem we keep seeing and without doubt, it will continue unabated for the foreseeable future.
When we see something as trivial as the flaws in VTech (and so many other organisations), it's hard not to conclude that there's simply not enough motivation for them to do better. We need to change that equation, and to do that we need to adjust the present economics of having a data breach.
Changing the ROI of Security via Stiffer Penalties
One thing I hear a lot is that it just doesn't make financial sense for organisations to invest more than the bare minimum in security. The argument goes that there's an insufficient ROI for a variety of reasons.
Firstly, selling security can be hard. On so many occasions in my prior corporate life, we'd attempt to increase the security spend and the challenge always came back to what we were asking for: cold, hard dollars in the immediate term and, in return, maybe we wouldn't get hacked. Or it wouldn't be as bad. It was hard to say, because it's an intangible potential future eventuality versus a real-world hit to the budget. Then, to make things worse, organisational incentive schemes are designed to reward individuals for money saved in the immediate term with no consideration as to what the long-term consequences of those cost savings may be.
Secondly, people will continually point to the share price of organisations that have suffered a data breach and observe that they've taken no real hit. I've found that the reality is a lot more nuanced than that (and I've spoken before about clearly identifiable short-term dips), but the very fact there's a perception that security incidents are inconsequential is worrying. Certainly, it's not as black and white as a data breach wiping millions off the share price.
And finally, there's the sting of regulators or, as it tends to be, a lack of sting. TalkTalk in the UK is a perfect example: here we have a company with revenues of £1.8 billion per year, and their (highly public) incident a couple of years ago led to a fine of... £400k. That's about 0.02% of revenue so, to put it in more relatable numbers, imagine you earn $100k per year and cop a $20 fine. That's not a disincentive to do it again, that's lunch.
That £400k was actually a record fine for the ICO in the UK, too. Keep in mind also that their incident was due to a SQL injection vulnerability exploited by a child - an egregiously bad oversight on their part, given how basic that class of flaw is (see the sketch below). What message does this send?
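To show just how well-trodden this ground is, here's a minimal sketch of the difference between an injectable query and a safe one. This is a generic Python illustration using an in-memory SQLite database - the table, columns and functions are hypothetical and have nothing to do with TalkTalk's actual code - but the pattern is the same in any language:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT, name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice@example.com', 'Alice')")

def find_user_vulnerable(email):
    # DON'T: concatenating user input straight into the query means input like
    # "' OR '1'='1" changes the query's logic and can dump the whole table
    query = "SELECT * FROM users WHERE email = '" + email + "'"
    return conn.execute(query).fetchall()

def find_user_safe(email):
    # DO: a parameterised query treats the input purely as data, never as SQL
    return conn.execute("SELECT * FROM users WHERE email = ?", (email,)).fetchall()

malicious = "' OR '1'='1"
print(find_user_vulnerable(malicious))  # returns every row - classic SQL injection
print(find_user_safe(malicious))        # returns nothing - the input is just a literal value
```

Parameterisation has been the standard defence for well over a decade, which is part of why a fine amounting to pocket change sends such a poor message.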
This has to change.
What Stiffer Penalties Might Look Like
In part 2, I wrote quite a bit about the European GDPR and its impact on personal data. I want to come back to GDPR again here because one of the headlines that's really made people pay attention is the potential for fines of up to 4% of gross annual worldwide revenue. In other words, that £400k TalkTalk fine could potentially blow out to £72m. Whoa.
I don't want to lean too heavily on GDPR here as it's yet to hit and we're yet to see how hard those penalties bite. They won't all be 4% either - that's a maximum - and we may well see many infringements result in mere warnings. The point is that a construct which provides scope for fines that really hit the bottom line is important, and a quick bit of arithmetic shows just how much that changes the numbers.
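Here's that arithmetic as a back-of-the-envelope sketch. The revenue and fine figures are the TalkTalk numbers from earlier, and the 4% is GDPR's ceiling rather than a prediction of what any actual penalty would be:

```python
# Back-of-the-envelope comparison: the pre-GDPR TalkTalk fine versus the GDPR ceiling
annual_revenue = 1_800_000_000  # TalkTalk's roughly £1.8 billion annual revenue
ico_fine = 400_000              # the actual (record) ICO fine
gdpr_ceiling = 0.04             # GDPR's maximum of 4% of worldwide annual revenue

print(f"ICO fine as a share of revenue: {ico_fine / annual_revenue:.3%}")        # ~0.022%
print(f"GDPR maximum fine: £{annual_revenue * gdpr_ceiling:,.0f}")               # £72,000,000
print(f"That's {annual_revenue * gdpr_ceiling / ico_fine:.0f}x the actual fine")  # 180x
```

Whether or not a regulator would ever impose the maximum, the gap between those two numbers is what changes boardroom conversations.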
Consider what the threat of serious penalties does to the ROI on security spend; it changes the equation such that there are stronger incentives not to be breached.
Of course, this all still depends on organisations acknowledging that they may one day become the victim of a breach themselves and, consequently, being more proactive about not having one in the first place. It also requires regulatory changes and even then, as with GDPR, there'd need to be acknowledgement that there are degrees of severity with these incidents.
Consider data breach scenarios where penalties would differ wildly: Disqus disclosed an incident that impacted 17 million people in October. However, we're talking email addresses and passwords and a one-day turnaround for breach disclosure. That, to me, seems like the sort of case where monetary penalties aren't appropriate. Compare that to the Equifax breach, with many times more data of a much more sensitive nature and then terrible handling of the incident, and I'd happily argue that the big stick should be brought out penalty-wise.
Summary
It almost feels like a sour note to end the series on: despite all the more proactive things I've written about in the previous 4 parts, some organisations still just won't get it. But what penalties will do is drive the behaviours discussed in those previous posts. Companies wanting to avoid fines will invest more in education, reduce the amount of data they hold (see how it's now a liability?) and encourage disclosure, whether that be by simply being more approachable or by running bug bounties.
Penalties should never be the goal - we don't want companies to have to dip into their pockets - but in a paradoxical way, we need them so that companies do their utmost to avoid them. These 5 steps together are how I'd fix data breaches.