We have known how to design secure IT systems since the 1970s. Academics have written down the principles, policymakers have nodded in agreement, and the industry has promised to do better. And yet, computers today are no more secure than they were fifty years ago. In fact, the vulnerabilities are piling up, and the chance of an organization being hit by a cyber incident remains astonishingly high: one in eight.
Why is it so difficult to incorporate security into the design from the outset? That question has preoccupied Bibi van den Berg, Professor of Cybersecurity Governance at Leiden University in the Netherlands. Together with her colleague Christina Del Real, Assistant Professor in Cyber Crisis at Leiden University’s Institute of Security and Global Affairs, she delved into the archives. She presented her findings during her keynote speech at the ONE Conference in The Hague: a history of misguided incentives, conflicting interests, and a Silicon Valley culture that prays at the altar of speed.
Back to the origins
“We said: let’s just read everything that has ever been written about security by design,” says Van den Berg. It turned into an archaeological quest through decades of literature. The goal was to find the origins of security by design – the vague concept everyone talks about but no one seems able to define.
And that’s not surprising. There have always been multiple conversations about security by design, each siloed in its own world. Academics had their own story. The industry had theirs. Policymakers had yet another. Often they weren’t even talking about the same thing, let alone talking with each other. But lessons did emerge from all those conversations.
The academics: crystal clear since 1970
In the early 1970s, when computer networks were just emerging, it was a no-brainer for scientists. Compare it to building a house: you put locks on the doors from the start, don’t you? Computer scientists immediately realized that if you connect computers to a network, and if you store vital information on them and share it with other computers via networks, then of course it has to be secure, the professor explains.
They formulated simple principles. Keep your software as simple as possible – after all, every line of code can contain a vulnerability. And give users access only to what they really need, not to the entire system – the principle now known as least privilege.
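In code, that second principle typically shows up as deny-by-default access control with explicit allow-lists. A minimal sketch in Python (the role and action names here are hypothetical, chosen purely for illustration):

```python
# Deny-by-default access control: each role gets an explicit allow-list,
# and anything not listed is refused.
ROLE_PERMISSIONS = {
    "clerk": {"read_record"},  # only what the job requires
    "auditor": {"read_record", "export_report"},
    "admin": {"read_record", "export_report", "delete_record"},
}

def is_allowed(role: str, action: str) -> bool:
    """An unknown role or an unlisted action is always refused."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("clerk", "read_record")
assert not is_allowed("clerk", "delete_record")  # no blanket access
assert not is_allowed("intern", "read_record")   # unknown role: denied
```

The point is that granting nothing by default keeps every extra permission from becoming a new place to make mistakes.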
In the 1980s, they went further. They developed formal models to prove when software is actually secure. In the 1990s, rules for secure software engineering followed. “And then they realized: it’s not just about code. It’s also about people. How do they use software? What do they need? How are they wired? User-centered security was born,” says Van den Berg. So the knowledge was there early on, but what did that mean in practice over the past fifty years?
The industry: security as a revenue model
“In the 1970s, the industry did absolutely nothing with security by design. It wasn’t until the 1980s that companies started to get nervous. They were selling products with holes in them, and that was starting to become noticeable.”
Their solution was brilliant—at least from a commercial perspective. “Oh, we can fix that, but by selling add-ons that solve security problems,” Van den Berg describes the reasoning. Virus scanners, firewalls, endpoint protection: it was all sold as a solution to problems that could have been prevented from the outset. And so an entire business model emerged around repairing insecure software.
At the turn of the century, Microsoft did something remarkable. Bill Gates wrote a company-wide memo stating that the IT industry ought to be ashamed of itself: security had to be inherent to the product, Microsoft’s co-founder argued. Microsoft went on to develop what it called “trustworthy computing” and a security development lifecycle. “And yes, it is somewhat ironic that this came from Microsoft of all companies,” Van den Berg acknowledges.
Policymakers: slowly waking up
Policymakers also started to take action. In the 1980s, the US Department of Defense published the Orange Book: a standard that described what secure computer systems had to comply with. The reason for its existence was crystal clear: sensitive data was being stored and shared via computer networks at an ever-increasing rate. Something had to be done.
But around 2000, policymakers realized that too little was being accomplished in this area. Academics had the knowledge, industry had the resources, but in practice little progress was made. “Academics became a little frustrated and looked at industry and policymakers and asked: where are you all? Why is nothing happening?” says Van den Berg.
After that, policymakers did what they could and started incorporating security by design into their strategies. The first EU cybersecurity strategy, early this century, already stated that products must be secure from the outset, not patched afterwards. Security could no longer be an afterthought; it had to be considered up front. That all sounds logical. But until 2013, the sentiment remained limited to mere mentions in policy documents.
Later, the tone changed. Governments finally switched to regulation. Van den Berg says legislators at last aimed to enshrine security by design in legislation, telling everyone in the industry to do better. The way things had been was no longer acceptable. The industry would have to meet certain standards. Period.
Three reasons why it doesn’t work
Okay, so academics have known all this for fifty years. The industry has been working on implementation since the beginning of this century. Policymakers have been turning the concept into legislation since 2013. And yet the question remains: why is everything still so very broken?
Van den Berg cites three factors. The first is the culture of Silicon Valley, encapsulated by “move fast and break things”. The sole aim was to innovate, to race to market, to be first. “It takes a lot of time and effort to make things secure, to really think through how you can make things secure,” explains Van den Berg. You don’t have that time when speed is the motto. Security by design and moving fast? The two are diametrically opposed.
Point two: security-as-an-add-on is worth money. Many parties earn good money from solving problems after the fact. That’s just the reality, says Van den Berg. And that reality makes it difficult to rise above it.
Three: it’s complicated. Really complicated. “Everything in cyberspace is interconnected, and this reality is complicated because it’s geopolitical, because it’s global, because there are different legislative requirements in different countries,” she sums up. “It’s a huge mess.”
But—and this is crucial—that doesn’t mean we have to accept it.
Simplifying is the trick
Van den Berg believes there is room for improvement, provided we look at the bigger picture. That means more than just scrutinizing the technology; it also requires a critical look at human behavior. At how people work. At what they need. The behavioral sciences and legal scholarship offer valuable insights here.
Take simplicity, which was already one of the core principles in the 1970s. The aim was to write as few lines of code as possible. But simplicity goes further. It’s also about design. “Most people want to see a straight line when they use software. Something simple that just works,” says Van den Berg.
Instead, we end up with software that is crammed with features. “We constantly give them a mess full of bells and whistles, all kinds of extra options. And all those options offer opportunities to make mistakes.”
The reason for all this unnecessary functionality is what research by the University of Twente calls ‘I-methodology’. Designers are often experts who see themselves as average users. “Let’s add these bells and whistles, because if I were an end user, I would find that great and very necessary,” as Van den Berg describes the fallacy.
Take macros in Word. “Why do we need macros in Word?” Van den Berg wonders. “There may just be one user worldwide who finds macros important. That’s a super user. And the rest of us don’t find macros relevant at all. And it’s a wide open security hole. Get rid of it. Turn it off.”
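On Windows, that advice can be enforced rather than merely recommended. A hedged sketch (assuming a per-user Office 16.0 installation; the VBAWarnings registry value of 4 corresponds to “disable all macros without notification” – verify against your Office version and group policy before relying on it):

```python
import winreg  # Windows-only standard library module

# Sketch: disable Word macros for the current user via the registry.
# Assumptions: Office 16.0, per-user setting; VBAWarnings = 4 means
# "disable all macros without notification".
key = winreg.CreateKey(
    winreg.HKEY_CURRENT_USER,
    r"Software\Microsoft\Office\16.0\Word\Security",
)
winreg.SetValueEx(key, "VBAWarnings", 0, winreg.REG_DWORD, 4)
winreg.CloseKey(key)
```

In managed environments, the same setting is usually pushed via Group Policy rather than scripted per machine.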
Her advice, then, is to scrap features, make apps simpler, and choose apps over platforms.
Platforms are a disaster
Nowadays, everyone longs for platforms, and vendors offer them as a result. One place for your email, your calendar, your tasks, your entire digital life. Microsoft does it, Google does it. Super convenient, right?
No, says Van den Berg. Having all your data in one place creates a single point of failure: if that platform gets hacked, everything is gone. What’s more, platforms are not as user-friendly as they seem. “You click yourself silly,” says Van den Berg. “It’s maddening to be on such a platform all day and then have to click away from your calendar to check your email and vice versa. It drives me crazy. This is not how it should be done.”
Apps are better, she argues. “Apps are islands. An app usually has one clear function. If that app is well designed—and that means few features—it’s user-friendly.” As an example, Van den Berg cites Signal, the encrypted messaging app. “Signal is not for profit. Signal is not on the stock market. It’s not part of something bigger. They don’t want to be bought out. And they deliver one thing. And the one thing they deliver, they do very well.” That’s what you want, she argues: focus, quality, no unnecessary bells and whistles.
In addition, apps offer natural guardrails. Van den Berg asked the audience at the ONE Conference a question: how many people have ever sent an email via their banking app? One person raised their hand. The rest had never even considered it. That’s how our brains work: you just don’t send an email through this particular medium. Instead, you use your banking app for financial matters: checking your balance, making payments, consulting your mortgage. Not for emailing. That line of thinking protects you.
From benign defaults to techno-regulation
Apps alone are not enough, as one also has to think about how to help users within those apps. In recent years, there’s been a lot of talk about benign defaults. These are default settings that protect users. This ideal is often linked to the assumption that people are stupid. Van den Berg does not agree. “I don’t like that argument. I think most people are just busy, distracted, not necessarily interested in security, and want to get things done. And security sometimes gets in the way.”
So you have to help them. With guardrails, with nudges, with a push in the right direction. An example: a toggle switch that is off by default, so that people don’t get into trouble. Advanced users can turn it on if they want to.
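In code, a benign default is nothing more than a secure value baked into the settings object itself. A minimal sketch (the settings class and its fields are hypothetical, for illustration only):

```python
from dataclasses import dataclass

@dataclass
class MessagingSettings:
    """Hypothetical settings object: risky capabilities ship switched off."""
    allow_remote_content: bool = False  # benign default: off
    allow_macro_autorun: bool = False   # benign default: off

fresh_install = MessagingSettings()  # safe without any user action
power_user = MessagingSettings(allow_remote_content=True)  # explicit opt-in
```

A busy, distracted user who never opens the settings screen gets the protective configuration automatically; only a deliberate opt-in changes it.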
But Van den Berg goes a step further. She calls it techno-regulation. “You can’t send an email with a banking app. That’s not possible. We haven’t added that feature.” There’s no switch you can flip to enable it; it is simply impossible. Because if there is a toggle, there will always be people who turn everything on – even if they don’t need those advanced features at all.
The point here is straightforward: don’t build such a feature into your app at all. “End users aren’t stupid, they’re not the weakest link, they’re not the problem,” Van den Berg emphasizes. “They’re just trying to get their work done, and we need to help them do that instead of hindering them.”
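The contrast with a benign default can be made concrete in code. In this sketch (a hypothetical class, not a real banking API), the constraint is not a setting that could be flipped back on – the capability simply does not exist:

```python
class BankingApp:
    """Hypothetical app with one clear function: banking, nothing else."""

    def check_balance(self) -> int:
        return 0  # placeholder for a real balance lookup

    def make_payment(self, to: str, amount_cents: int) -> None:
        pass  # placeholder for a real payment flow

    # Note what is absent: there is no send_email() method and no toggle
    # to enable one. The rule lives in the shape of the code itself.

app = BankingApp()
# app.send_email("x@example.com")  # AttributeError: the feature was never built
```

That is techno-regulation in miniature: the design itself, not a configuration screen, decides what users can and cannot do.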
A matter of will
These principles—simplicity, apps instead of platforms, techno-regulation—are not rocket science. Above all, they require a different design philosophy. A philosophy that puts the user at the center instead of the technology. That incorporates security from the outset instead of adding it as an afterthought.
Fifty years ago, we already knew how to do it. Fifty years later, we’re still not there. The question is: do we accept that, or not? “We truly believe that we cannot fall into the trap of accepting that this is the way it is,” says Van den Berg. “We have to do better.”
That’s the crux of the matter. This is a difficult request, yes. The incentives are wrong, yes. The culture is resistant, absolutely. But the alternative – continuing to muddle through with insecure systems, with add-ons that have to solve problems that shouldn’t exist – is not an option.
Security by design is not a utopia. The principles have been around for fifty years. It is now up to us to finally apply them.
Also read: Google: new code increasingly written in ‘memory safe’ languages like Rust