
Serverless computing promises great things for businesses and developers. Whereas the rise of the cloud made on-prem computing optional, a serverless approach goes a step further still: never having to maintain servers at all. However, some issues tend to throw a spanner in the works, making serverless less suitable for many applications than it first seems.

The logic behind serverless is relatively simple. Those who build applications would rather not deal with server maintenance at all. In that sense, serverless builds on the advantages cloud services already offer over on-premises infrastructure: migrating to the cloud has allowed many organizations to move datacenter resources outside their own walls, shedding the associated maintenance costs. Serverless goes one step further, because conventional cloud services still require customers to organize the infrastructure and backend themselves, and they don't scale all the way down to zero. Serverless services, by contrast, are billed on a pay-per-use basis, which in theory allows for effortless scalability in both directions. The model is accessed through so-called Function-as-a-Service (FaaS) platforms, including AWS Lambda, Google Cloud Functions and Microsoft Azure Functions.
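To make the FaaS model concrete, here is a minimal sketch of an AWS Lambda-style function in Python. The handler signature follows Lambda's convention; the specific event payload (a `"name"` field) is an invented example for illustration:

```python
import json

# A minimal Lambda-style function: the platform invokes the handler once
# per request and bills only for execution time. There is no server process
# for the developer to provision, patch, or scale.
def handler(event, context):
    # 'event' carries the request payload; 'context' holds runtime metadata.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

When no requests arrive, nothing runs and nothing is billed, which is the "scale to zero" property that conventional always-on cloud instances lack.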

In essence, the serverless approach takes away as many headaches as possible for IT personnel and lets application developers focus on what they do best: developing applications. Serverless has therefore been praised for years for its supposedly low cost and the time savings it promises. Developers can link existing platforms and services together without having to program every feature and functionality themselves; they only write so-called "glue code" to make the parts work together at a high level, keeping the amount of custom code to a minimum. As Stedi CEO Zack Kanter notes, this is a big advantage because extra code slows down development in the long run ("code is debt"). Serverless adoption, he argues, therefore yields the highest "development velocity": no other way of building software delivers added business value as quickly. Developers must first carefully map out what their application will look like, after which an "aggressive adoption of off-the-shelf platforms" is necessary, according to Kanter.
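The "glue code" idea can be sketched as follows. In this hypothetical example, `charge_payment` and `update_inventory` stand in for hosted third-party APIs; the developer's own code only orchestrates them:

```python
# "Glue code": the developer's function merely orchestrates off-the-shelf
# services rather than implementing payments or inventory logic itself.
# These two functions are hypothetical stand-ins for hosted service clients.
def charge_payment(order):
    return {"status": "paid", "order_id": order["id"]}

def update_inventory(order):
    return {"status": "updated", "sku": order["sku"]}

def handle_order(order):
    # The only custom logic is the wiring between the two services.
    receipt = charge_payment(order)
    stock = update_inventory(order)
    return {"receipt": receipt, "stock": stock}
```

The less custom logic a team writes, the reasoning goes, the less code there is to debug and maintain later.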

In addition, serverless is said to solve many existing problems. After all, the responsibility for maintaining the infrastructure lies elsewhere, and additional services can be implemented without much effort: various microservices might integrate a fully functional payment system, update inventory data, and add all kinds of other functionality. Another advantage is that the failure of one of these services need not be a disaster. Serverless designs assume modular, independent microservices, unlike monolithic applications, where a single failure can take down the entire operation.
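That failure isolation can be illustrated with a small sketch. The `recommend` service here is hypothetical and deliberately broken; the point is that the order flow degrades gracefully instead of failing outright:

```python
# Sketch of failure isolation between independent services: if the
# (hypothetical) recommendation service is down, the order still completes.
def recommend(order):
    raise RuntimeError("recommendation service is down")

def place_order(order):
    result = {"order_id": order["id"], "status": "confirmed"}
    try:
        result["suggestions"] = recommend(order)
    except RuntimeError:
        # Degrade gracefully: skip the optional feature, keep the order.
        result["suggestions"] = []
    return result
```

In a monolith, by contrast, a crash in one subsystem can bring down the whole process.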

There’s always a “but”

This will all sound highly appealing to developers. A serverless approach simply sounds like the natural next step after cloud adoption, where servers also no longer need to be managed by the end user. All the benefits of a mature, engineered server infrastructure without the associated costs and worries.

In March of this year, however, Amazon Prime Video showed how this theory can fail to translate into practice. Its team had designed a serverless tool to monitor video and audio quality, with independent components that could each scale separately under heavier traffic. Yet they ran into a hard limit: some components caused bottlenecks at only 5 percent of the expected real-world load. That came as a surprise, since the whole idea behind serverless is that scalability is inherent in the design. With that advantage gone, the required building blocks were simply too expensive to maintain.

The Prime Video team concluded that the distributed nature of the tool offered no actual advantage. All components had to be running for it to function at all, making a major asset of the serverless approach irrelevant to this use case. When the components were combined into a monolithic tool, the high-level design remained roughly the same, but the bottom line changed dramatically: the shift delivered cost savings of 90 percent.

High costs after all

A common complaint about serverless is that it works primarily at large scale and is relatively expensive at small scale. Prime Video's example complicates that picture, as unexpected barriers can surface precisely when the approach is scaled up in practice. Other drawbacks are more general. The loss of control over server hardware also means a loss of choice, and performance can be unstable because the customer has no control over it. Anyone running inefficient code may also incur relatively high costs compared to executing the same code on on-prem hardware or on self-managed servers via a cloud service.

The pay-per-use model is not only a positive, either. An inefficient application can consume an unpredictable (and therefore potentially huge) amount of resources, causing costs to skyrocket. With a conventional approach, the required infrastructure has to be planned in advance, which also puts a hard ceiling on what an application can consume. The advantage of minimal upfront planning thus comes with the disadvantage of not knowing what a larger scale will cost. That can hit a software maker hard financially.
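A back-of-the-envelope calculation shows why pay-per-use cuts both ways. All prices below are invented for illustration and do not reflect any real provider's rates:

```python
# Illustrative only: these prices are made up, not actual provider rates.
PRICE_PER_MILLION_REQUESTS = 0.20   # hypothetical per-invocation fee ($)
PRICE_PER_GB_SECOND = 0.0000166     # hypothetical compute fee ($)
FIXED_SERVER_PER_MONTH = 70.00      # hypothetical always-on server ($)

def serverless_monthly_cost(requests, gb_seconds_per_request):
    # Cost grows linearly with usage, with no upper bound.
    return (requests / 1_000_000) * PRICE_PER_MILLION_REQUESTS \
         + requests * gb_seconds_per_request * PRICE_PER_GB_SECOND

# Low, spiky traffic: serverless stays near zero when idle.
low = serverless_monthly_cost(100_000, 0.1)
# Heavy or inefficient workload: costs keep climbing past the fixed server.
high = serverless_monthly_cost(500_000_000, 0.5)
```

With these assumed numbers, the low-traffic bill is well under the fixed server's price, while the heavy workload costs many times more: exactly the unbounded exposure the paragraph above describes.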

When is it useful?

That a serverless architecture can be useful in specific scenarios is shown by an example from Red Hat. Its Principal Product Manager Naina Singh explains how financial services can leverage serverless with a great deal of success. The flexible scale of serverless allows banks to ensure that their services move in lockstep with customer demand. Fintech companies can also turn to serverless to use it for highly variable workloads, Singh suggests. She cites calculating financial risk and product pricing as specific use cases. It shows that the technology can certainly excel, albeit under certain conditions.

Software architect Ben Morris touches on more general areas where serverless makes the most sense to use. For example, it works well for a compact application that should always be available and won’t get too much traffic long-term. When a short-term traffic spike does occur, the serverless architecture can accommodate it. The big advantage of the pay-per-use basis in this scenario is that it “scales to zero,” Morris said.

In conclusion, serverless suits specific situations well, but it is far from a silver bullet. While the approach promises scalability, reality shows that it can still run into unexpected bottlenecks and unforeseen costs. Until those issues are resolved, serverless won't take over as the dominant approach for building applications.

Also read: X slashes its bills with on-prem adoption: is a wider ‘cloud exit’ imminent?