Why are simple applications more vulnerable than complex ones?

A surprising finding: simpler applications are more likely to contain critical vulnerabilities than complex ones. How can that be?

This insight comes from research conducted by Black Duck. Applications in the financial sector seem particularly vulnerable, with researchers noting that organizations often underestimate the importance of securing smaller websites and simple applications.

These applications frequently suffer from basic security flaws, such as inadequate or missing Transport Layer Protection. This stands in sharp contrast to larger, more complex sites and applications that not only have fewer serious vulnerabilities but also get patched more quickly.
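To illustrate what that baseline looks like in practice (this sketch is mine, not taken from the report), enforcing transport layer protection in even the smallest web application typically amounts to redirecting plain-HTTP requests and sending an HSTS header. A minimal Flask example, assuming TLS is terminated by or in front of the app:

```python
# Minimal sketch (illustrative, not from the Black Duck report): enforce HTTPS
# and HSTS at the application layer in a small Flask app.
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def force_https():
    # Redirect any plain-HTTP request to its HTTPS equivalent.
    if not request.is_secure:
        return redirect(request.url.replace("http://", "https://", 1), code=301)

@app.after_request
def add_hsts(response):
    # Instruct browsers to use HTTPS for all future requests (HSTS).
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response
```

In real deployments TLS is often handled by a reverse proxy or load balancer, but the point stands: this baseline is cheap to add and frequently missing from small applications.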

For context, Black Duck uses its own classification system to determine application complexity. Applications are considered simple if they have minimal interactivity and a short crawl tree (the path the crawler takes through all URLs), while dynamic content and interactive elements increase complexity.

The troubled security landscape

Black Duck examined nineteen different industries and 1,300 applications, uncovering 96,917 vulnerabilities. One striking observation is that across most industries, bugs in smaller applications tend to persist much longer. For instance, in the education sector, 72 critical vulnerabilities found in simple systems took an average of 342 days to fix. In contrast, 69 critical bugs in larger sites were typically resolved within just one day. The situation can be even more concerning in some sectors – utilities, for example, left vulnerabilities in medium-sized apps and sites unpatched for an average of 876 days, while large projects were secured within a day.

The research also revealed significant variation in vulnerability types. Misconfigurations (36,321) and cryptographic flaws (30,726) were the most common issues Black Duck discovered. The researchers also examined broken access control, insecure design, and outdated components, but the first two categories alone accounted for roughly two-thirds of all vulnerabilities found.

What now?

Unsurprisingly, Black Duck, as an application security specialist, advocates for enhanced application security measures. Specifically, they recommend combining DAST (Dynamic Application Security Testing), SAST (Static Application Security Testing), and SCA (Software Composition Analysis). DAST identifies runtime issues, SAST catches code-level errors, and SCA detects known vulnerabilities. The company particularly endorses pairing these approaches – for example, using SAST+DAST to identify cross-site scripting and SQL injection risks, or combining SCA+SAST to detect vulnerabilities before code deployment.
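To make the pairing concrete, here is a small sketch of my own (not an example from Black Duck) of the kind of flaw this combination targets: SAST flags the dangerous string-built SQL query in the source, while DAST can confirm at runtime that crafted input actually exploits it. The function names and schema are hypothetical:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # SQL injection risk: user input is interpolated directly into the query.
    # A SAST scan would flag this pattern in the code; a DAST scan could
    # confirm it is exploitable against the running application.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver handles escaping, closing the injection path.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```

SCA would complement both by flagging, for instance, an outdated database driver or web framework with known CVEs before the code ships.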

While these techniques are effective, the core issue appears to stem from a misconception. Complex applications receive the most attention and the quickest security responses. Developers naturally focus on managing that complexity, particularly when they are new to an organization or project and need guidance through a larger codebase. Bigger apps also handle multifaceted tasks and serve customers on the frontend. Smaller applications, however, seem to fall through the cracks.

It’s unrealistic to address every vulnerability immediately. Still, the stark contrast between secure, complex applications and vulnerable, simple ones suggests a prioritization problem. The challenge lies in finding a practical balance, especially since, as Black Duck notes, some sectors have limited time and resources to devote to cybersecurity. In addition, simple applications are often deployed for specific, temporary use cases, while complex applications tend to serve multiple purposes and receive regular updates, which reduces the likelihood of lingering vulnerabilities.

Monoliths versus distributed solutions

David Heinemeier Hansson, creator of Ruby on Rails, offers a relevant insight that may at first seem tangential: avoid distributing your resources. Small organizations shouldn’t try to operate like large ones, and the same goes for developers who lack the time or budget to maintain a variety of apps. When resources are limited, integrated, monolithic solutions are preferable to distributed ones. This principle applies both to individual applications and to maintaining multiple sites and apps with a small team. It’s particularly crucial in critical sectors, where a single vulnerability can lead to serious compromise. As Black Duck points out, consequences such as HIPAA violations in the U.S. healthcare industry provide ample motivation to maintain proper cyber hygiene proactively.

Managing numerous small applications appears as impractical as unnecessarily distributing a single application’s components. Peripheral codebases can cause prolonged security issues. When problems do arise, especially in critical sectors, it may seem absurd to outsiders that a small, seemingly insignificant site caused such significant issues – but in light of these findings, it’s entirely understandable.

Also read: When is a critical vulnerability actually dangerous?