Defining the Elusive Serverless App


Application Transformation continues to evolve and offer greater options. The newest source of disruption in software development and cloud native architecture is a concept called ‘Serverless’. In this model, developers simply upload their code (or “Function”) to a Serverless cloud environment, where it runs automatically when needed; servers, configuration, availability, provisioning, capacity and scaling are all handled by the platform.
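To make that concrete, here is a minimal sketch of what such a ‘Function’ might look like, assuming a Python handler in the style used by AWS Lambda behind an API gateway; the event shape and function name are illustrative rather than prescribed by any one platform.

```python
import json

# Minimal AWS Lambda-style Python handler (illustrative sketch).
# The platform passes in `event` (the trigger payload) and `context`
# (runtime metadata); servers, provisioning and scaling never appear here.
def handler(event, context):
    # For an API gateway trigger, the caller's input typically arrives
    # as a JSON string in the event body.
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")

    # Return an HTTP-style response for the platform to relay to the caller.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Deploying something like this typically amounts to uploading the code and wiring up a trigger; everything else in the list that follows is the provider’s problem.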

Some other key characteristics of Serverless applications include:

  • Event-driven – Serverless apps are architected to run only when invoked by an external event or trigger. Common activation triggers include database updates, API gateway calls, user requests and scheduled events.
  • Ephemeral – because they are invoked by triggers, Serverless apps don’t need to be ‘always on’ awaiting a request. They are spun up and torn down as needed.
  • Hosted – Serverless apps typically rely on third-party hosted platforms and environments, which is why many also use the term ‘Function-as-a-Service’ (FaaS) to describe them.
  • Consumption based – because Serverless apps are ephemeral, pricing is often on a pure ‘pay-as-you-go’ consumption basis. Customers are charged only for the time a function is actually executing.

Of course the name ‘Serverless’ is pretty misleading – the application has to run on a server somewhere. While the model doesn’t eliminate the need for servers, it does eliminate the need for developers to care about servers, operating systems and the runtime environment, which are effectively outsourced to the service provider. The name Serverless refers to the fact that the model provides almost full abstraction between the developer and how software is actually deployed and run.

But when hearing the Serverless story a question instantly comes to mind – isn’t that what PaaS was supposed to do?

Conceptually yes, but in most cases PaaS platforms still require you to identify scaling parameters and thresholds for a given application. In a Serverless environment scaling is instead handled automatically by the service provider for each individual request.

In addition, Serverless platforms are designed to bring the entire application up and down for each individual request (sometimes in a matter of milliseconds), something that PaaS runtime environments were definitely not designed to do.

Due to these facts, Serverless architectures are particularly well suited for use cases and applications that are:

  • Simple – Serverless tends to be a good fit for small, single purpose functions (think microservices) that are designed for a specific, well-defined task.
  • Unpredictable – services with highly variable, unpredictable, or ‘spiky’ demand patterns are compelling candidates, due both to Serverless auto-scaling and to its consumption-based pricing model.

As you can imagine, Serverless provides an ideal platform for low-cost experimentation, not just with the application itself but also with the economics of the function. Serverless provides a direct and easy way to understand the cost of an application on a per-transaction or per-customer basis. Linking this to delivered value opens up a new way of building, analyzing and potentially monetizing services.
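As a rough illustration of that per-transaction view, the back-of-the-envelope calculation below prices a function purely on invocations and execution time; the rates are placeholders chosen only to show the arithmetic, not any provider’s actual price list.

```python
# Back-of-the-envelope consumption pricing for a Serverless function.
# The rates below are illustrative placeholders, not a provider's price list.
PRICE_PER_MILLION_REQUESTS = 0.20   # flat charge per 1M invocations
PRICE_PER_GB_SECOND = 0.0000167     # charge per GB of memory per second of execution

def monthly_cost(invocations, avg_duration_sec, memory_gb):
    """Estimate monthly cost from invocation count, duration and memory size."""
    request_cost = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    compute_cost = invocations * avg_duration_sec * memory_gb * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# Example: 5M requests/month, 200ms average runtime, 512MB of memory.
cost = monthly_cost(5_000_000, 0.2, 0.5)
print(f"Estimated monthly cost: ${cost:.2f}")   # roughly $9.35
```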

In addition to services for mobile apps, websites and IoT, other interesting early use cases that have emerged for Serverless include:

  • Image processing – one common early use case is image processing, including object and facial recognition as well as image analysis. In these scenarios the upload of a new image by a user or another application triggers either image modification or algorithmic analysis, depending on the nature of the app.
  • Data analytics and ETL – another common use case involves invoking Serverless apps to run an ETL or processing action whenever a data set is updated. This includes filtering and transforming data, which makes Serverless particularly useful for log analysis, where functions can enhance and reformat raw log data for further analysis or reporting (a minimal sketch follows this list).
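As a sketch of that ETL pattern, the function below assumes it is triggered whenever a batch of raw log lines arrives; the event shape and field names are hypothetical, chosen only to show the transform step.

```python
# Illustrative ETL-style handler: reformat raw, space-delimited log lines
# into structured, JSON-ready records whenever a new batch arrives.
# The event shape ({"records": ["<timestamp> <level> <message>", ...]})
# is a hypothetical example, not any specific provider's trigger format.
def handler(event, context):
    structured = []
    for line in event.get("records", []):
        # Expect lines such as: "2024-01-01T12:00:00Z ERROR payment failed"
        timestamp, level, message = line.split(" ", 2)
        structured.append({
            "timestamp": timestamp,
            "level": level,
            "message": message,
        })

    # In a real pipeline the result would be written to a data store or
    # analytics service; here it is simply returned for illustration.
    return {"count": len(structured), "records": structured}
```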

So what’s the catch?

Given their relatively recent introduction, Serverless applications also currently come with some significant limitations that are particularly relevant for the enterprise. These include:

  • Performance limitations – current service provider performance and implementation limits around concurrency, latency and execution duration narrow the applicable uses for Serverless apps. For example, AWS Lambda functions are automatically terminated if they run longer than five minutes. Another example is in-memory data caching: because the cache is emptied each time a function is taken down, it is effectively not a viable option for Serverless apps.
  • Service maturity – Serverless is still an emergent architecture and third-party platforms are still relatively immature. Those expecting enterprise-class monitoring, logging, debugging or error handling, for example, should probably look elsewhere.
  • Vendor Lock-In – every technology choice involves lock-in at some level (even open source). Given the dependency of Serverless applications on triggers generated by other components, switching environments won’t be as easy as just porting your code to a new provider. It may also involve migrating other components such as data stores, API gateways, message queues and other Serverless apps. Given these dependencies, lock-in may be even more of an issue for customers than with IaaS or PaaS.

Like Cloud Native and microservices, Serverless architectures in the near term are going to be a far better fit in the enterprise for greenfield development than for legacy modernization. For some use cases it will be appropriate, for others not; as with any new or emerging paradigm, the key is to experiment with Serverless in targeted scenarios where it makes sense. Over time the boundaries between Serverless and PaaS are likely to blur, just as they currently are with Containers, broadening the applicability of the model. In the meantime a focus on exploration and experimentation appears to be the best course of action in the enterprise.

Interested in learning more about how to drive Application Transformation using Serverless, Cloud Native and other disruptive software development architectures? Contact us to learn more about how Dell EMC Services can help.


About the Author: Scott Bils

Scott Bils is the Vice President of Product Management for Dell Professional Services. In this role, Scott and his team are responsible for driving the strategy and growth of Dell’s Consulting, Education and Managed Services portfolio, and for Dell’s transformation practices in the areas of multicloud, applications and data, resiliency and security, and modern workforce. Scott brings over 20 years of experience across Corporate Strategy, Technology Services, Product Management, Marketing and Business Development. In his prior role at Dell, he built and led the Digital Transformation Consulting Practice. Prior to Dell, Scott held executive roles at Scalable Software, Troux Technologies and Trilogy, and also worked at McKinsey and Co. and Accenture. Scott holds an MBA from the University of Chicago and a Bachelor’s in Finance from the University of Illinois Urbana-Champaign.