
How to survive an end-of-life software experience


Redis Enterprise 7.2 reaches its official end of life in February 2026. What should users do in this situation, and what lessons can they take away for the end-of-life events they will inevitably face with other platforms and tools?

Redis is good, but when a version update backs users into a corner, what should they do? An open source, in-memory data store known for its ability to act as a distributed cache, message broker and database, Redis is lauded for the high-performance, low-latency reads and writes it achieves by keeping data in memory. Come February next year, Redis application developers, data science professionals and other connected operations staff will need to have done some prudent planning.

That deadline alone should be reason to start planning ahead. However, the open source version of Redis will no longer receive official security patches for version 7.2. There are avenues for users to attempt to apply fixes themselves (if they have the skills and wherewithal to do so), but Redis OSS 7.2 is unquestionably at end of life.
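Before any planning, teams need to know which line their instances are actually running. One lightweight way is to parse the `redis_version` field from the output of `redis-cli INFO server`. A minimal sketch (the EOL cut-off encoded below reflects this article's premise about the 7.2 line, not an official Redis policy source):

```python
# Sketch: decide whether an instance is on the end-of-life 7.2 line by
# parsing "INFO server" output. The redis_version field is standard;
# the EOL rule below (7.2 and older) is an assumption from the article.

def parse_redis_version(info_text: str) -> tuple[int, int, int]:
    """Extract (major, minor, patch) from INFO server output."""
    for line in info_text.splitlines():
        if line.startswith("redis_version:"):
            raw = line.split(":", 1)[1].strip()
            major, minor, patch = (int(p) for p in raw.split(".")[:3])
            return (major, minor, patch)
    raise ValueError("redis_version field not found")

def is_eol_72(version: tuple[int, int, int]) -> bool:
    """True when the instance is on the 7.2 line or older."""
    return version[:2] <= (7, 2)

if __name__ == "__main__":
    # Trimmed example of what "redis-cli INFO server" returns.
    sample = "# Server\r\nredis_version:7.2.5\r\nredis_mode:standalone\r\n"
    version = parse_redis_version(sample)
    print(version, is_eol_72(version))  # (7, 2, 5) True
```

Run against every instance in the estate, this turns "we think most of us are on 7.2" into a concrete inventory.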

Excuse me, can I see your license?

“A key factor in the whole change management for Redis (or indeed any technology) is licensing,” explains Martin Visser, Valkey technology lead at Percona. “Redis the company has changed the software license for Redis the software. Redis 7.2 was the last version available under the BSD 3-clause license, which allows users to deploy and use Redis as they see fit. Redis 7.4 was launched under a new Redis Community license and under the Server Side Public License, or SSPL. Neither of these licenses is on the list approved by the Open Source Initiative, as they prevent specific use cases, which is against the Open Source Definition’s guidance that the software can be used by any user for any purpose. Redis then changed its approach again and adopted the GNU Affero license for version 8.0.”

In practice, says Visser, this has meant that a lot of those using Redis are on version 7.2. But why do some users end up sticking with one particular version of any piece of software for an extended period of time? The answer is (almost always) stability and reliability of core functionality… and of course because the license of the software in question is one that a developer or data scientist’s company is happy for them to use.

Do nothing, don’t panic

“So what are the options? The first is … do nothing. Redis 7.2 continues to be a solid option for in-memory data caching and as a distributed database. If it is not broken, then why make that change? However, the reality is that the lack of security updates after this date may force people to update. While it might be possible to mitigate security risks in the future, making a change on your own timetable is better than having to migrate in the face of a security scare.”

Visser says the second option is to upgrade to a later version of Redis. Even if the team has to juggle with license changes, the central mission has to remain focused on working software. The message is: open source is (hugely) important, but a pragmatic approach to system architecture is even more vital.

“Moving to Valkey version 7.2 should be a ‘lift and drop’ exercise as the code bases are compatible. The Valkey team has committed to supporting its version of 7.2 until April 2027, providing a longer path for planning ahead if that is what users require. Furthermore, the main use case for Redis and Valkey is around in-memory data caching and this functionality is agnostic of the version that you are using. So, while you might initially opt for Valkey 7.2, you can also move to later versions like Valkey 8.0 with minimal impact, while getting all the internal improvements for free,” said Visser.

Five pillars of migration wisdom

To plan ahead around a potential migration, Visser uses the Redis predicament to detail five key steps:

  1. Audit the environment: Find out how many instances are in place and where they are deployed. Spoiler alert: it is usually more than any one member of the team might estimate (often because it includes instances for testing, resilience or data backup), so be thorough.
  2. Check deployments and extensions. Any extension that augments or adds functionality alongside the core installation (and that might make deployments easier and faster) needs to be accounted for before the move.
  3. Plan the move. Form a timing plan for the shift; moving everything in one go is risky. Look at how the team can limit the impact on performance by using downtime windows and staggering migrations.
  4. Make the move. This is the physical migration from one system to another. First take a snapshot of the current instances, then use that snapshot to pre-populate the new instances before cutting traffic over.
  5. Monitor performance. Capture observability and system data before the move so there is a baseline to compare against afterwards.
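As a worked illustration of the snapshot-then-verify idea in step 4, the check below samples keys from a source store and confirms the destination returns the same values. It is deliberately duck-typed: `src` and `dst` only need a `get()` method, so the sketch runs here against plain dicts, but the same logic applies to real Redis/Valkey clients (an assumption about your client library's interface, not a prescribed tool):

```python
import random

def verify_migration(src, dst, keys, sample_size=100, seed=0):
    """Compare a random sample of keys between source and destination.

    src/dst: any objects with a get(key) method (plain dicts work, and
    so do redis-py style clients). Returns the keys whose values differ.
    """
    rng = random.Random(seed)  # fixed seed keeps the check reproducible
    sample = rng.sample(list(keys), min(sample_size, len(list(keys))))
    return [k for k in sample if src.get(k) != dst.get(k)]

if __name__ == "__main__":
    # Simulate a pre-populated destination with one corrupted key.
    old = {f"user:{i}": f"profile-{i}" for i in range(1000)}
    new = dict(old)
    new["user:42"] = "corrupted"
    print(verify_migration(old, new, old.keys()))
```

Sampling rather than comparing every key keeps the check cheap enough to run repeatedly during a staggered migration; raise `sample_size` (or pass the full key set) for the final cut-over check.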

As always, developer documentation will be key, but it should also cover aspects such as software deprecation policies and amortisation plans, so that the finance function knows when a particular part of the system architecture can be written off and, crucially, users can anticipate and prepare for changes.

Data scientists should think about and assess the impact of retiring models, datasets, or APIs on any other dependent workflows; they should then strategise for transitions by providing reproducibility options and backwards-compatible outputs. Teams will need to handle the shutdown of data pipelines, ensuring that any dependent systems are safely transitioned. Finally, for now, let’s also note that end-of-life planning requires the careful archiving of models, datasets and logs to ensure long-term reproducibility and compliance. 
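On the archiving point, a lightweight way to make model and dataset archives verifiable years later is to write a checksum manifest alongside them. A minimal standard-library sketch (the `MANIFEST.json` layout here is an illustrative convention, not any standard format):

```python
# Sketch: record a SHA-256 digest for every archived artifact so that
# long-term reproducibility checks can detect silent corruption.
import hashlib
import json
from pathlib import Path

def build_manifest(artifact_dir: str) -> dict:
    """Map each file under artifact_dir to its SHA-256 digest."""
    manifest = {}
    for path in sorted(Path(artifact_dir).rglob("*")):
        if path.is_file() and path.name != "MANIFEST.json":
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(artifact_dir))] = digest
    return manifest

def write_manifest(artifact_dir: str) -> Path:
    """Write MANIFEST.json beside the archived artifacts; return its path."""
    out = Path(artifact_dir) / "MANIFEST.json"
    out.write_text(json.dumps(build_manifest(artifact_dir), indent=2))
    return out
```

Re-running `build_manifest` against the archive at audit time and diffing it against the stored `MANIFEST.json` confirms that models, datasets and logs are still bit-for-bit what was archived.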

Next… end-of-life as-a-service? (EOLaaS) 

In our current climate of automation-everything and AI-driven acceleration, we might envisage a time (quite soon) when we start to talk about end-of-life as-a-service (EOLaaS) coming to the fore. Could this be so?

As we stand in 2025, it feels like there are too many moving pieces in the jigsaw puzzle of image and instance installation to hand the whole job over to AI. For now, we can take some (or all) of the advice presented here and think about effective planning and clear communication at the human-to-human level as initial imperatives.