About 20 years ago I remember sitting in a room with a Unisys systems engineer, listening to him explain the principles of Utility Computing and what needed to be done to bring those ideas to reality. At the time, he thought the IT industry was about 10 years away from being able to deliver that vision. One of the key requirements was that developers should be able to build an application without any reference to the hardware needed to run it, and that the application needn't run all the time.
Fast forward to today and substitute Cloud for Utility. Serverless computing appears to offer exactly that: the ability to develop applications without having to provision servers for them, applications which need only run when a specific service calls for them. This reinforces one of those old IT industry sayings: any new technology takes twice as long to reach the mainstream as was initially thought, but then its functionality and application go twice as far as anyone envisaged.
Xebia Labs' 2017 predictions forecast that serverless computing would, as they put it, move beyond being an interesting talking point this year. It is also clear that the way the Cloud has developed and is being used has expanded our idea of the role of IT well beyond the simple "plug-in-the-wall" idea that was Utility Computing.
Despite its name, serverless computing doesn't mean you don't need servers. Applications built from microservices and APIs fire up requests for server resources in response to specific events. The idea is that deployment is completely automated, leaving it to an IaaS or Cloud Service provider to manage the provision of compute capacity. AWS Lambda, IBM OpenWhisk and Google Cloud Functions are all examples of commercially available serverless computing services. This frees the developer to focus on writing code and the DevOps team to focus on automated deployment, making the goal of repeatable code/test/deploy cycles a reality.
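To make the event-driven model concrete, here is a minimal sketch of a function in the AWS Lambda style. The `handler(event, context)` signature follows Lambda's Python convention; the event payload shown and the local invocation at the end are illustrative assumptions, not a real deployment:

```python
import json

def handler(event, context):
    """Invoked by the platform only when an event arrives; the developer
    never provisions or manages the server that runs it."""
    # 'event' carries the trigger payload, e.g. an API request body.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }

# Simulate an invocation locally; in production the cloud provider
# calls the handler and bills only for the execution time used.
print(handler({"name": "serverless"}, None))
```

The point of the sketch is what is absent: no web server, no port, no process lifecycle. All of that is the provider's problem.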
From a business perspective, one of the key attractions of serverless computing is the ability to pay only for what you use, so your investment in infrastructure doesn't sit idle waiting for a peak load. This makes great sense for applications like image processing, where specific actions trigger resource-intensive processing, perhaps to change or amalgamate images.
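The image-processing case can be sketched as an upload-triggered function. The event layout below loosely follows the shape of an AWS S3 notification, but the record structure and the `thumbnail/` naming are assumptions for illustration; real code would fetch and resize each object rather than just record it:

```python
def thumbnail_handler(event, context):
    """Hypothetical handler that would run only when images are uploaded,
    so no compute is paid for between uploads."""
    processed = []
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]
        # A real implementation would download the object and generate
        # a resized copy; here we only record the intended output key.
        processed.append(f"thumbnail/{key}")
    return processed

# Simulated upload event for one image.
event = {"Records": [{"s3": {"object": {"key": "photos/cat.jpg"}}}]}
print(thumbnail_handler(event, None))  # ['thumbnail/photos/cat.jpg']
```

Because the function exists only for the duration of each upload, the idle cost between peaks is zero, which is precisely the pay-for-what-you-use attraction described above.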
IoT, where sensor feeds trigger analysis, is another area where serverless computing is expected to deliver tangible cost benefits.
Just as there are some applications that will work well in a serverless environment, there are others that won't. The very fact that new function requests generate the need to spin up additional compute resources increases latency, however automated and rapid the process. This may be fine for many applications, but if you need high performance with very low latency, serverless is probably not an effective solution.
It is probably no surprise that the presentations and discussions at ServerlessConf in New York last May indicated that most early adopters are start-ups building full value chain applications from loosely coupled, often off-the-shelf, microservices and APIs. However, while enterprises aren't likely to start trying to shoehorn monolithic legacy apps into a serverless environment, there was some evidence of experimentation with specific add-on processes and new apps.
The theme emerging from our review of the Xebia Labs predictions is that there are many exciting new developments in Cloud and DevOps, and real opportunities to develop and deploy new technologies and paradigms like serverless computing.
However, IT and the business need a clear understanding of the business priorities that inform where, when, how and whether to migrate legacy applications or to build anew using new tools and techniques.
Being able to tell the CIO what works and what doesn’t in this new environment is crucial. It’s what we do for a living.
To book a meeting with Peter Borner directly, please do so here.