This is the trope that just won’t die. Despite some great work from people like EmmaW and the Socio-Technical Group at the NCSC, ISACA and Angela Sasse’s team at UCL, the list of vendors and commentators trotting out this rubbish is long and distinguished.
There have always been dissenting voices who try to add nuance to the argument and point out the complexity of human-computer interaction and how it feeds into cyber security risk, but sales and marketing goes brrrr and out come these low-value hype pieces.
Which humans get blamed
The fundamental problem with the statement that “humans are the weakest link in cyber security” is that it’s actually true, but not in the way the typical writer thinks. Clearly computers cannot bear any responsibility for security failures, because they’re computers, and cannot be held responsible for anything.
What this statement normally means is “the person who clicked on the phishing link” or “the person who opened the attachment” is the weakest link, thereby holding that person accountable for something they are not responsible for and have no authority over.
Even when the definition of ‘people’ expands, it still tends to be limited to cyber security staff, systems administrators and developers, who as a group all have one thing in common: no authority over (and therefore no accountability for) decisions about cyber security funding and resources. Scratch your average techie and you’ll often find a disgruntled renegade despairing of management’s inability to prioritise risk over profit.
Which humans deserve the blame
What ‘humans’ should mean in this context is:
- “the executive who failed to apply appropriate due diligence pre-acquisition”, or
- “the manager who de-prioritised developer skills”, or
- “the board who wouldn’t fund third party code review or penetration testing”
… because those people do have authority, are responsible and absolutely should be held accountable. I suspect the reason the finger isn’t pointed in this direction in vendor and industry press pages is the desire not to criticise the goose in case it stops laying golden eggs (or buying enterprise security products riddled with bad code).
Contemporary examples of this are not difficult to find, which in itself says something really depressing about the technology industry. The most recent high-profile event is a chronic case which I’ll unpick, but here are a few more from just the last six months in case you’re bored:
- The almost identical attack in February by the same group on another file transfer product (Fortra’s ironically named ‘GoAnywhere’)
- The exploitation of another input validation vulnerability in Barracuda in June by suspected Chinese espionage groups
- The ‘Mobile Irony’ vulnerability from last week (Ivanti’s Endpoint Manager Mobile, formerly MobileIron), which led to an attack on the Norwegian government.
It might strike you (it certainly struck me) that these vulnerabilities are not in open source or low-cost products, but in expensive enterprise security products, the very products you might expect to have had a higher level of diligence applied. I think it’s fair to conclude that a very high level of diligence was not applied, which raises the question of why you’d pay so much money for these products. Anyway, onto the case study: MOVEit.
Progress Software MOVEit
On May 31st Progress Software published an alert with patches for a vulnerability later tagged as CVE-2023-34362. The vulnerability was what’s called an ‘SQL injection’ vulnerability, caused by a failure to do something called ‘input validation’. If those terms mean nothing to you, think of it as taking food from a stranger and putting it in your mouth without even checking what it is.
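The actual MOVEit code isn’t reproduced here, but the general class of bug is easy to show. The sketch below is illustrative only, written in Python against an in-memory SQLite database (not the technology MOVEit uses): the unsafe function pastes user input straight into the query text, while the safe one passes it as a parameter so the database driver treats it as data rather than SQL.

```python
import sqlite3

# Illustrative only -- not MOVEit's code, just the shape of the bug.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

def find_user_unsafe(name: str):
    # Vulnerable: attacker-supplied input is concatenated straight into the query.
    # A 'name' of "' OR '1'='1" turns the WHERE clause into something always true.
    query = "SELECT * FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Parameterised query: the input is bound as data and can never become SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # leaks every row in the table
print(find_user_safe("' OR '1'='1"))    # returns nothing
```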
This is a basic programming error (it’s covered in the BBC Bitesize GCSE Computer Science guide; GCSE is high-school level Computer Science, for non-UK readers). However, to focus on the programmer is to miss the bigger picture, which is that this was more than one mistake. It was:
1. A failure by the programmer who wrote those lines of code.
2. A failure by the employer to ensure the developer had the requisite skill and understanding (through recruitment or training).
3. A failure by the employer to implement effective peer review (where someone else checks the code before it’s tested or put into production).
4. A failure by the employer to conduct a human or automated security code review before the code went into production, or at any point after that (a toy sketch of the automated kind follows this list).
5. A failure by the employer to conduct penetration testing to discover vulnerabilities before putting the code into production.
6. A failure by the employer to implement effective cyber security risk management (which is the overarching cause of 1-5).
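To make point 4 concrete, here is a toy illustration of what ‘automated security code review’ means in practice. Real teams would use a proper SAST tool rather than anything this crude; the sketch just shows the principle of mechanically flagging queries built from dynamic strings.

```python
import ast

# Toy static check: flag calls to .execute() whose first argument is built by
# string concatenation or an f-string -- a common sign of SQL injection risk.
RISKY_NODE_TYPES = (ast.BinOp, ast.JoinedStr)  # "..." + name, f"..."

def find_risky_queries(source: str, filename: str = "<code>") -> list[str]:
    findings = []
    for node in ast.walk(ast.parse(source, filename)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "execute"
                and node.args
                and isinstance(node.args[0], RISKY_NODE_TYPES)):
            findings.append(f"{filename}:{node.lineno}: query built from dynamic string")
    return findings

sample = 'cur.execute("SELECT * FROM users WHERE name = \'" + name + "\'")'
print(find_risky_queries(sample))  # flags line 1
```

Running a check like this (or, realistically, an off-the-shelf scanner) on every commit costs almost nothing; choosing not to is a management decision, not a programmer one.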
Notice how many of those mistakes were down to one individual worker, and how many were down to the employer, which ultimately means the senior management team and the board of executives.
Progress Software Customers (and their customers ad infinitum)
What made this whole scenario such a car crash is how the vulnerability led to a supply chain security incident and, ultimately, a massive breach of millions of people’s personal data, because of the way Progress Software’s customers managed their security.
I’m not absolving Progress Software of their responsibility to ship products without basic coding errors, but this is file transfer software, used to get data from one place to another.
It should not, EVER, have been used to store personal or sensitive personal data for any period longer than needed to make that transfer, and depending on its sensitivity the data should have been encrypted in a way independent of the software itself.
I’m not talking with 20/20 hindsight here; I’m thinking back to implementing a file transfer service between a university and a hospital in 2015. We encrypted the data before it was uploaded and implemented a retention rule which meant data would be destroyed after an agreed period.
An attacker compromising this service in the same way the Cl0p ransomware gang compromised a MOVEit server would only have been able to access a limited number of files at most 72 hours old, encrypted with a minimum of 256-bit AES. This was not an expensive APT/XDR/AI/insert-your-own-buzzword-here control; it was low-cost or open source software and simple design principles.
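This isn’t the 2015 system’s actual code, but a minimal sketch of the two design principles is below, assuming Python and the `cryptography` library’s AES-256-GCM implementation: encrypt before the file ever touches the transfer service, and destroy anything older than the agreed retention period.

```python
import os
import time
from pathlib import Path
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

RETENTION_SECONDS = 72 * 60 * 60  # destroy anything older than 72 hours

def encrypt_before_upload(path: Path, key: bytes) -> Path:
    """Encrypt a file with AES-256-GCM before it is handed to the transfer service."""
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, path.read_bytes(), None)
    out = path.with_name(path.name + ".enc")
    out.write_bytes(nonce + ciphertext)
    return out

def enforce_retention(upload_dir: Path) -> None:
    """Delete anything that has sat in the transfer area longer than the agreed period."""
    cutoff = time.time() - RETENTION_SECONDS
    for f in upload_dir.iterdir():
        if f.is_file() and f.stat().st_mtime < cutoff:
            f.unlink()

# The key is shared between sender and recipient and never held by the
# transfer service, so compromising the server yields only ciphertext.
# key = AESGCM.generate_key(bit_length=256)
```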
What I find disappointing (not surprising, not any more) is that so many huge organisations apparently failed to implement such basic security controls, or to ensure their suppliers implemented them.
Ernst & Young and PricewaterhouseCoopers are the two which spring to mind. Huge internationals with $40b+ revenue who had unencrypted data stolen from a file transfer server. These are companies with their own cyber security divisions!
It’s hard to break the responsibilities down as I did for the MOVEit vulnerability, because I don’t know if the EY and PwC data was stolen from a server they operated, or one a supplier operated on their behalf (or a supplier of a supplier). However, supply chain security risk has been a thing for years, and while ensuring your entire chain has effective cyber security controls in place is hard, multi-billion dollar companies should be able to get it right.
What’s important to remember in all this is the victims, whose personal or sensitive personal data has been released and could be used for extortion, identity theft or fraud. They didn’t cause this problem, but they will feel its consequences. There’s no sign that the incredibly highly paid executives who are ultimately responsible for these failures will feel any consequences at all. That’s capitalism, folks.