It was an innocent mistake, with huge consequences. Robin Seggelmann was a programmer working on OpenSSL, a software library used to make secure connections over the internet. In 2014, it emerged that a tiny error of his – likened to misspelling “Mississippi”, and all but invisible in 400,000-odd lines of code – had allowed the world’s hackers into the servers of Google, Amazon, Facebook, Tumblr and more, exposing sensitive personal data including credit card numbers and passwords.
It wasn’t really Seggelmann’s fault; rather, it was one more indictment of our slapdash approach to computer security. The bug, dubbed Heartbleed, went unnoticed for two years. By then, it affected a huge proportion of the world’s servers. The only solution was to patch the software and hope for the best.
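To see how small the slip was, here is a deliberately simplified C sketch of a Heartbleed-style flaw (not the actual OpenSSL code; the function names and fixed-size buffer are illustrative). A heartbeat request carries a payload plus a length field, and the server echoes the payload back; the vulnerable version trusts the attacker-supplied length, the fixed one checks it first:

```c
#include <string.h>  /* memcpy */
#include <stddef.h>  /* size_t */

#define BUF_SIZE 64  /* illustrative buffer size, not from OpenSSL */

/* Vulnerable sketch: echoes back `claimed_len` bytes without checking
 * it against the payload's real length, so an attacker who claims a
 * large length gets adjacent server memory copied into the reply. */
size_t heartbeat_vulnerable(const char *payload, size_t real_len,
                            size_t claimed_len, char *out) {
    (void)real_len;                     /* the missing check: ignored! */
    memcpy(out, payload, claimed_len);  /* may read past the payload */
    return claimed_len;
}

/* Fixed sketch: a one-line bounds check rejects malformed requests. */
size_t heartbeat_fixed(const char *payload, size_t real_len,
                       size_t claimed_len, char *out) {
    if (claimed_len > real_len)
        return 0;  /* drop the request instead of leaking memory */
    memcpy(out, payload, claimed_len);
    return claimed_len;
}
```

The whole fix amounts to a single missing comparison, which is why the error was likened to a misspelling: trivially easy to make, and all but invisible in a codebase of that size.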
That’s not good enough. This year, the World Economic Forum listed cyberattacks and data fraud in its top five most likely global risks, alongside extreme weather, natural disasters and the failure to tackle climate change. It estimated that cyberattacks will cause $8 trillion of losses in the coming five years. As critical infrastructure becomes more and more interconnected – potentially even to your kettle and toaster if the “internet of things” ever truly becomes reality – we create more points of vulnerability that can be exploited.
It is time to call time on the era of the digital sticking plaster that is the software patch. More effective ways of protecting a networked society exist at the level of computer hardware (see “Uncrackable computer chips stop malicious bugs attacking your computer”). They would require an overhaul of the way we do computing, and are unlikely to be a panacea. But the price of the innocent mistakes allowed by our current software-based approach to security is simply too high.