
Storage needs a makeover. Often dismissed as back-office grunt work, storage is something we all recognise as necessary and important, but not many of us want to focus on it as a computing discipline in and of itself. Which is wrong, obviously.

Notwithstanding the fact that one Australian tech news portal used to run its storage section under the headline banner ‘Snorage’ (that’s Aussie humour for you, you either love it or you hate it), today we’re at a point where data-driven everything doesn’t exist without a storage backbone.

Could computational storage be the saviour that elevates this oft-unloved technology to vie with the bright lights that shine on the developer zone every day?

JB Baker, VP of marketing at ScaleFlux, thinks so. His firm develops advanced System-on-a-Chip (SoC) devices and software that connect storage, memory and compute to accelerate applications and optimise infrastructure resources in the datacentre, the enterprise and at the edge.

Computational storage (CS) has been discussed for years, but it has yet to reach high levels of mainstream adoption. However, suggests Baker, in the near future, most large enterprises will likely embrace this technology with open arms.

How to ‘evolve’ a workload

“As enterprises continue to generate more data, they’re increasingly faced with the reality that ‘scale-out’ is not a long-term sustainable strategy. Datacenter space is at a premium and every node added triggers cascading costs and architecture challenges. The fixed size of today’s storage devices is already creating problems. It has become increasingly complicated and expensive to evolve workloads: as they grow or change, the only option is often to add additional drives,” asserted Baker.

Fair enough then, but if more drives don’t fit in the server, additional servers must be added – and if that’s not possible, everything gets much more expensive, especially in a world where edge devices are generating massive amounts of data. Did we mention spiralling datacentre power and cooling costs? 

Right, there’s a definite challenge here, so what do we do?

Baker reminds us that, as we understand it, commodity flash storage has just about been pushed to its limit. ScaleFlux notes that it has worked intimately on computational storage that alleviates many of these issues by offloading work from the server's Central Processing Unit (CPU), so that enterprises can get more functionality from their servers, including better endurance and performance, higher capacity and cost savings.

“But there is now a real choice available between commodity solid-state drives (SSDs) and SSDs with computational storage technology, packaged as native NVMe (non-volatile memory express) compatible drives that need no special software, app configurations, drivers, or things-that-are-drivers-but-not-called-drivers,” said Baker.

Magical analyst house Gartner acknowledged the growing importance of computational storage adoption in its Hype Cycle for Storage and Data Protection Technologies. According to the report, more than 40 percent of enterprise storage will be deployed at the edge in the next few years. By 2026, large enterprises will triple their unstructured data capacity stored as file or object storage on-premises, at the edge, or in the public cloud.

Why hold back?

With edge use cases becoming more common and many organisations planning to increase their storage capacity exponentially, it raises the question: why haven't all enterprises already adopted computational storage?

“Simply put, the technology has put off many organisations because they were led to believe that it’s ultra-complicated to implement and use. The industry, including players in the computational storage space, has historically focused on highlighting the programmability aspect of computational storage versus its more practical, easy-to-understand benefits, such as enhanced performance, endurance and capacity,” stated Baker, in realistic but still positively upbeat tones.

While it’s true that computational storage drives are programmable, the reality is that most users don’t want to tinker with the firmware on their SSDs (nor have the technical know-how to do so). Essentially, the industry took the next generation of drive technology and inadvertently made it seem scary and intimidating.

“The result has been a snowball effect,” said Baker. “What I mean is, once vendors started over-focusing on programmability, analysts started highlighting it and some in the media started writing about it. A kind of false narrative emerged which suggested that computational storage was an ‘exotic’ technology, unsuitable for your typical enterprise IT person who wants to purchase drives, install them within a system and move on to their next task.”

Plug-and-play computational storage?

But, despite all this discussion, clarification and analysis so far, we need to remember a key truth: there are computational storage drives designed to do precisely that. They can be plugged into a system just like any other NVMe drive or SSD, with no special configurations or drivers needed. The ScaleFlux team are quick to point out that with plug-and-play computational storage drives, enterprises can immediately begin reaping the benefits associated with offloading the server CPU, including up to four times more capacity.
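
To make the plug-and-play claim concrete, here is a minimal sketch (an illustration, not ScaleFlux code; it assumes a Linux host with the standard sysfs layout) that enumerates NVMe controllers. A computational storage drive with a native NVMe interface would appear in this list exactly like any other SSD, bound to the stock nvme driver:

```python
from pathlib import Path

# Linux exposes every NVMe controller the kernel sees under /sys/class/nvme.
# A native-NVMe computational storage drive shows up here like any other SSD.
SYS_NVME = Path("/sys/class/nvme")

def list_nvme_controllers() -> list[tuple[str, str]]:
    """Return (controller, model) pairs for every NVMe device the kernel sees."""
    if not SYS_NVME.exists():
        return []  # no NVMe devices, or not a Linux host
    results = []
    for ctrl in sorted(SYS_NVME.iterdir()):
        model_file = ctrl / "model"
        model = model_file.read_text().strip() if model_file.exists() else "unknown"
        results.append((ctrl.name, model))
    return results

if __name__ == "__main__":
    for name, model in list_nvme_controllers():
        print(f"{name}: {model}")
```

No vendor tooling appears anywhere in that path, which is rather the point: if the drive speaks native NVMe, the operating system needs nothing extra to use it.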

Since onboard processing allows for transparent, in-line compression, host-based compression can be turned off to free up the CPU for higher-priority tasks. Computational storage also gives enterprises a smaller footprint: lower power and cooling requirements translate into more work-per-watt and more work-per-second.
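
As a rough illustration of what "turning off host-based compression" means in practice, consider this hedged Python sketch; write_block and the drive_compresses_inline flag are invented names for illustration, not any vendor's API:

```python
import zlib

# Minimal sketch of the trade-off described above. When the drive compresses
# transparently on-device, the host skips the CPU-hungry compression pass
# and simply writes the raw payload.
def write_block(f, payload: bytes, drive_compresses_inline: bool) -> int:
    if drive_compresses_inline:
        data = payload                  # raw write; the drive compresses in-line
    else:
        data = zlib.compress(payload)   # host CPU spends cycles compressing
    return f.write(data)
```

The point of the sketch is simply that the compression call disappears from the host's write path; the stored bytes shrink either way, but the CPU cycles are freed for higher-priority work.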

“Counterintuitively, compression performed in hardware becomes a performance accelerator as the effects of write-amplification are mitigated when less data is read and written. Performance is enhanced because, by writing less, you’re opening up more read cycles. In real-world environments, this gives enterprises higher performance, increased input/output operations per second (IOPS), and lower latency so that they can exceed application service level agreements (SLAs). This same effect also improves the drives’ endurance (lifespan),” clarified Baker.
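
For a feel of the numbers behind that claim, here is a back-of-the-envelope worked example; every figure in it is an assumption chosen for illustration, not a ScaleFlux measurement, and it conservatively holds write amplification fixed even though compression tends to lower it too:

```python
# Back-of-the-envelope arithmetic for the effect Baker describes.
host_writes_gb = 1_000      # data the application writes to the drive (assumed)
compression_ratio = 2.0     # transparent in-line compression ratio (assumed)
waf = 3.0                   # flash write amplification factor (assumed)

nand_writes_plain = host_writes_gb * waf
nand_writes_cs = (host_writes_gb / compression_ratio) * waf

print(f"NAND writes, no compression:          {nand_writes_plain:,.0f} GB")
print(f"NAND writes, 2:1 in-line compression: {nand_writes_cs:,.0f} GB")
# Halving physical writes roughly doubles endurance and frees internal
# bandwidth for reads, which is where the IOPS and latency gains come from.
```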

He says that, when it comes down to it, drives built with both native NVMe support and computational storage features are essentially just better versions of the SSD. Crucially, that's what people want: something that gets the job done and is simple to use.

Re-education of nations

“A sort of re-education for the industry around computational storage needs to happen before enterprises become overwhelmed with data and failing SSDs in the coming years. People need to know that there are mainstream options available, not highly technical, programmable science projects outside the scope of your everyday IT person,” concluded ScaleFlux’s Baker.

The suggestion here is that, yes, computational storage drives are programmable, but they also offer rather more. If we follow the creed that Baker and the ScaleFlux team are espousing here, we may come to think of these devices as a comparatively easy-to-use way for any enterprise to do significantly more with less when it comes to storage, a need that will only become more critical as time goes on.

Is data storage sexier and are you more awake (no snorage, remember?) to the whole topic now? We thought so.