The fire at the NorthC data center in the Dutch town of Almere has raged all day. Disruptions were reported at the country’s Chamber of Commerce, a nearby public transport operator, and various GP clinics, among many others. Utrecht University will even remain closed on Friday due to the consequences of the data center outage. How could a single NorthC location serve as a single point of failure for these organizations? Do we simply have to conclude that cyber resilience has its limits? And what will be the consequences of this already infamous fire?
First, the good news: the physical evacuation of the data center site went according to plan. There were no casualties from the fire, save perhaps for the balance sheets of NorthC itself and its affected customers. It also appears that no toxic substances were released. The fire is under control, but at the time of writing it has not yet been fully extinguished.
The NorthC story
A brief reconstruction of what we know so far about the fire itself. On the edge of the town of Almere, next to the A6 highway, a fire broke out around 8:45 a.m. local time on Thursday morning in the northwest compartment of the NorthC data center. The fire appears to have been limited to the back of the site where, in NorthC’s words, “technical facilities” are located. NorthC is launching an investigation into the exact cause, but we already know some details about what is located at, or at least very near, the fire site. Assistance from Lelystad Airport and Schiphol was required to preventively cool a diesel tank. This tank is normally used only as an emergency supply and is physically separated from the data halls that contain the IT infrastructure itself.
As we write this, part of the power supply is still burning. The main IT equipment is, as is the norm, located in the main building, with a strict separation between the servers and supporting hardware. The fire department ordered NorthC to cut off all power. This shows that even multiple backups for the power supply are no guarantee that a limited fire will have equally limited consequences – operators must always comply with the fire marshal, of course, who will take whatever precautions are needed to contain the hazard.
Above all, we hope that NorthC will be able to determine the cause of the fire once it has been extinguished. For data center operators worldwide, an incident like this is instructive, both for their own operations and for their peers. Consider the water leak in 2023 that caused a fire in a French Google Cloud data center, the result, incidentally, of the failure of hardware not managed by the hyperscaler itself but still located within the facility. Unexpected problems piled up at the time, such as a shortage of water within the facility to extinguish the fire, on top of software issues and the resulting degraded or lost availability for Google Cloud customers.
Certification Limitations
Back to Almere. Anyone who takes a look at the certifications and standards NorthC adheres to at this location would quickly be impressed, or at least put at ease. ISO 27001, 9001, 14001, and 22301 are accompanied by, among other things, biometric access control, video surveillance, and 24/7 monitoring by NorthC. The real-world test shows that none of these measures prevented an outage. Without speaking of a formal “violation” of the aforementioned certifications and standards, we can conclude that the outage jeopardized business continuity for various parties. That is not to say this is NorthC’s fault specifically.
A clearly defined compartment appears to have been affected, but appearances can be deceiving. It is possible that the damage extends further than what is visible from the outside. Furthermore, availability was “only” partially lost due to the fire. Nevertheless, reality shows that critical systems at NorthC customers did indeed fail completely, with the fire as the apparent direct or indirect cause. Note: during incidents like this, an outage at one organization may appear to be linked to the data center problem without that necessarily being the case. Yet the problems are so unusual and widespread that, for many of them, there are almost certainly causal links with NorthC.
The first lesson from the NorthC fire is already clear: do not assume that signing a contract with an ISO-certified party guarantees business continuity. Even an act of God, as we’ll tentatively characterize the fire pending the forensic investigation, is no excuse. A data center can go down for any reason whatsoever, whether it is located in the Middle East, in an earthquake zone, or in the Dutch province of Flevoland.
Countless ‘minor disasters’
NorthC itself is currently working overtime. A quick count on its own site reveals 27 data center locations, 14 of which are in the Netherlands. Where possible, these other locations will take on additional workloads; some customers use multiple facilities, often with the specific goal of not being dependent on a single location. This incident will most likely prove that decision worthwhile. But anyone running IT infrastructure with another operator or in-house is not automatically spared the consequences of the fire either.
The impact on customers is as varied as it is unpleasantly surprising. The most striking example is Utrecht University (UU). It will keep its doors shut on Friday; the university libraries will also remain closed on Saturday and Sunday. The network, applications, and website are all experiencing issues, according to the Dutch newspaper NRC. More striking, or rather more shocking, is the complete failure of the university’s access cards, which rendered offices and study areas inaccessible. At certain locations, even the accessible restrooms were closed. In its communications to staff, the UU does imply that some redundancy is in place. It discourages staff from making “unnecessary” repeated login attempts, because “this way we help limit the pressure on the systems as much as possible while recovery efforts are underway.”
The university was far from the only victim. Although Transdev, the public transit operator in the province of Utrecht, has since resolved the issues, it faced limited communication between bus drivers and the dispatch center. In emergencies, drivers could still call the emergency number. Additionally, the emergency buttons in the buses did not work (!), which shows that even a critical system can simply fail because of a problem at a single supplier.
Sometimes it is not entirely clear whether the NorthC fire caused a reported outage. For example, the Chamber of Commerce (KvK) stated that “a malfunction in the data center” interrupted service, but later removed that sentence. Other affected parties included the Central Bureau of Statistics (CBS), medical billing company Infomedics, and members of SURF.
There is certainly a counterexample. The FlevoHospital in Almere proudly reported that the NorthC fire did not hinder patient care: “The FlevoHospital deliberately works with multiple data centers. This means there are no consequences for patient care.” So it can be done this way, too.
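What “working with multiple data centers” boils down to in practice is that every critical service has at least one live path that does not run through the affected site. As a purely illustrative sketch, assuming hypothetical health-check endpoints per site (none of the URLs below are real, and production setups would more likely rely on DNS failover, anycast, or load balancers than client-side logic), a minimal failover check might look like this:

```python
"""Illustrative active/standby failover check across two data center sites.

All endpoints are hypothetical; real multi-site setups typically rely on
DNS failover, anycast, or load balancers rather than client-side logic.
"""
import urllib.error
import urllib.request

# Hypothetical health-check endpoints, one per site.
SITES = [
    "https://dc-primary.example.org/health",    # the site that just went dark
    "https://dc-secondary.example.org/health",  # standby site elsewhere
]


def first_healthy_site(sites, timeout=2.0):
    """Return the first site whose health endpoint answers with HTTP 200, or None."""
    for url in sites:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    return url
        except (urllib.error.URLError, OSError):
            continue  # site unreachable or slow: try the next one
    return None  # every site is down: time for the disaster recovery plan


if __name__ == "__main__":
    site = first_healthy_site(SITES)
    print(f"Routing traffic to {site}" if site else "All sites down; activate DR plan")
```

The point of the sketch is the design choice rather than the code: the fallback path has to exist, be tested, and be reachable on a day when the primary site is unavailable.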
Yet another wake-up call
Outages are not drills. Once they reach a certain scale, they almost always expose shortcomings that remain hidden even with extensive red teaming and backup tests. Moreover, even the most notorious incidents aren’t quite what they seem. Consider the fact that security firm CrowdStrike updated its solution in a rather fragile and failure-prone manner for 13 years and largely suffered no issues whatsoever, before its spectacular failure on July 19, 2024. Other examples include highly publicized threat actors hitting a major company in exactly the way that was already known about worldwide: ShinyHunters had already claimed hundreds of victims via a Salesforce leak since September 2025 before stealing millions of personal records from Dutch telco Odido this year.
The point is that theory and knowledge offer no guarantees for real-world incidents, which are by their very nature unpredictable. Had NorthC anticipated that the fire department would require a complete shutdown of the power supply for a fire in a single compartment? Did NorthC’s customers know that their own systems could be taken down via this single point of failure? Were their backup systems inadequate, or did it take too long to implement them?
In any case, the NorthC fire is yet another wake-up call for organizations that a single dependency for mission-critical workloads can be disastrous. Among the affected customers will be parties who knew that an outage like this could occur and accepted that risk. Such a concession may simply be a practical necessity. But we’d wager that many of those affected did not foresee the exact consequences, or that only certain technical staff had anticipated them (and perhaps warned upper management about it?).
We do not know these specific cases from the inside, but we do know that CIOs, CISOs, security personnel, IT support, and other staff often see similar incidents coming from a mile away. Whether you’ve been affected or not, knowledge of the technical workings of your own systems is required right up to the highest corporate level. Simply having a basic overview of dependencies and assessing the associated risks may be sufficient for a CEO or IT decision-maker. Those who lack this knowledge must seek it out, whether from internal or external sources. If they fail to do so, fate has a habit of intervening.
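As a hedged sketch of what such a “basic overview of dependencies” could look like in practice: a simple inventory of services, the facilities they depend on, and a check that flags every mission-critical service tied to a single site. The service and facility names below are invented for illustration; a real inventory would of course contain an organization’s own systems.

```python
"""Toy dependency inventory that flags critical services hosted in a single facility.

All service and facility names are invented for illustration only.
"""
from dataclasses import dataclass


@dataclass
class Service:
    name: str
    critical: bool
    facilities: tuple[str, ...]  # data centers or clouds this service runs in


# A hypothetical inventory an IT decision-maker might keep up to date.
INVENTORY = [
    Service("access-control",  critical=True,  facilities=("dc-almere",)),
    Service("patient-records", critical=True,  facilities=("dc-almere", "dc-groningen")),
    Service("intranet-wiki",   critical=False, facilities=("dc-almere",)),
]


def single_points_of_failure(inventory):
    """Return every critical service that disappears along with a single facility."""
    return [s for s in inventory if s.critical and len(s.facilities) < 2]


if __name__ == "__main__":
    for svc in single_points_of_failure(INVENTORY):
        print(f"RISK: {svc.name} depends entirely on {svc.facilities[0]}")
```

Running a check like this against an honest inventory is exactly the kind of exercise that surfaces single points of failure long before a fire does.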