What is Serverless Computing and Why is it Important

Serverless computing has blown up in the past 6 months, and along with all the excitement comes a lot of questions. I’ll attempt to address some of those questions and talk about the pros and cons of serverless.

We created Iron.io five years ago to solve the problems that serverless computing solves. In fact, we built the company on this premise. We were trying to solve a problem we had at our previous company, where we had to set up a DIY data processing system for each one of our customers, then manage and monitor all those servers. We wanted (needed!) a system that allowed us to write code that could crunch a lot of data and run it on an as-needed basis, on a system where we wouldn’t have to think about or manage the servers. And we wanted to be able to use that same system for all of our clients and have it easily scale out to many more.

So, like any self-respecting engineers, we went ahead and solved our own problem, and that solution became Iron.io.

Back in 2012, our own Ken Fromm wrote a great article on ReadWriteWeb called “Why The Future Of Software And Apps Is Serverless”, and 4 years later it pretty much all still holds true, with gems like:

The phrase “serverless” doesn’t mean servers are no longer involved. It simply means that developers no longer have to think that much about them.

and

Going serverless lets developers shift their focus from the server level to the task level.

The funny thing about calling it serverless back then was how many people thought it was ridiculous. Just take a gander through the Hacker News comments on that article; there’s some entertaining stuff in there.

Back when we first started Iron.io, I actually wanted to pull a Salesforce.com and put a “No Servers” crossed-out circle image on our homepage. We didn’t end up doing it, but I think it would have made a good statement.

Anyway, the future seems to have caught up and we’re really excited about all this momentum. On to the questions!

 

Does Serverless Really Mean There Are No Servers?

No, of course not. The point is that you, as a developer, don’t need to think about them.

 

Yes, the term serverless is misleading.

If you’re an ops person who wants to run a serverless platform, you’re going to be thinking about a lot of servers.

Functions as a Service

A newer way to describe serverless is bubbling up, and that is Functions as a Service, or FaaS. Which, other than being a little corny, is a much more accurate description. A function is essentially a small program that does one small thing; an app, on the other hand, does a LOT of things.
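To make that concrete, here’s a rough sketch of a single function. The handler signature follows AWS Lambda’s Python convention; the event fields and the “welcome” task are made up for illustration.

    # One small, self-contained program that runs in response to an event.
    # (Lambda-style Python handler; the event fields here are hypothetical.)
    import json

    def handler(event, context):
        email = event.get("email", "")          # e.g. a new-signup event
        message = "Welcome, {}!".format(email)  # the one small thing this function does
        return {"statusCode": 200, "body": json.dumps({"sent": message})}

That’s the whole deployable unit: no web framework, no routing, no app to keep running.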

I’ll use serverless and FaaS interchangeably throughout this article.

How is Serverless/FaaS Different than PaaS?

PaaS could be considered the first iteration of serverless, where you still have to think a little bit about the servers (how many VMs do you need?) but you don’t have to manage them. Serverless goes a step further: you don’t even need to think about how much capacity you’ll need in advance.

The other difference, and the more important one, is that you break down your app into bite-sized pieces. Instead of the monolithic app you’d run on a PaaS, you break your app into small, self-contained programs, or functions. For instance, each endpoint in your API could be a separate function. These functions are then run on demand, rather than full-time like an app running on a PaaS.

From an ops perspective, the benefit of breaking an app down into functions is that you can scale and deploy each function independently of the others. For instance, if one endpoint in your API is where 90% of your traffic goes, or your background image processing code is what eats up most of your compute time, that one bit of code, that one function, can be distributed and scaled much more easily than scaling out your entire application. More on that later.

 

What about Microservices?

It really seems like just yesterday that the microservices trend started. The idea behind microservices is to break down your monolithic application into small services so that you can develop, manage, and scale them independently. FaaS takes that a step further by breaking things down into even smaller pieces.

The trend is pretty clear: the unit of work is getting smaller and smaller. We’ve gone from monoliths to microservices to functions:

[Diagram: monolith → microservices → functions]

There will probably always be a place for both microservices and FaaS. Some things you just can’t do with functions, like keeping an open WebSocket connection for a bot. Also, an API/microservice will almost always be able to respond faster, since it can keep connections to databases and other things open and ready.

Another thing to note is that by grouping a set of functions together behind an API gateway, you have yourself a microservice. So microservices and FaaS can coexist in a nice way. End users of your service don’t care whether, behind the scenes, your API is implemented as a single app or a bunch of functions; it acts the same either way.
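Here’s a rough sketch of that idea; the routes and handler names below are made up for illustration:

    # Two independent functions...
    def get_user(event):
        return {"status": 200, "body": {"id": event["id"], "name": "example"}}

    def create_user(event):
        return {"status": 201, "body": {"id": "new-user-id"}}

    # ...grouped behind a gateway-style route table. Callers just see a users API;
    # they can't tell whether it's one app or a bunch of separate functions.
    ROUTES = {
        ("GET", "/users/{id}"): get_user,
        ("POST", "/users"): create_user,
    }

    def gateway(method, path, event):
        return ROUTES[(method, path)](event)

In a real deployment the gateway would be a managed service and each function would deploy and scale on its own, but the shape is the same.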

The Benefits of Serverless

Whether you are a developer using a service like AWS Lambda or Iron.io, or building your own serverless system, you should understand why anyone would go through the trouble of building software this way when the old way works just fine. The main reason from all sides is cost and efficiency.

From a developer perspective, there are a couple of reasons. 1) If you just want to write some quick bit of code that can respond to events, without going through the hassle of creating an entire app/API just to do some small thing, then FaaS is awesome. But you’ll usually end up needing more than a single function anyway, and creating a single-function app is pretty easy (look at Sinatra).

The primary reason, though, is 2) efficiency and cost.

From a developer’s perspective, you only pay for the time your function(s) are running, and since you don’t have to run an app 24/7 anymore, that can be a significant cost savings.

From an IT/ops perspective, if you were to build and manage your own FaaS infrastructure (or be a FaaS provider like Iron.io), it’s all about optimizing resources, which translates to cost. Cost and optimal use of resources is a huge reason to go serverless. If you are a big company with a bunch of apps/APIs/microservices, you are currently running those things 24/7 and they are using resources 100% of the time, whether they are in use or not. With a FaaS infrastructure, instead of running apps 24/7, you can execute functions for any number of apps on demand and share all the same resources. Theoretically, you could reduce waste (idle time) to almost nothing while still providing fast response times. For a FaaS provider, this cost savings is passed on to the end user, the developer. For an enterprise, this can reduce capex and opex big time. I’ll explain this in more detail in the next section.

 

Resource Utilization and Time Slicing

When you have a typical application, you deploy it and it requires some set of resources 24 hours a day, 7 days a week. It requires memory, CPU, and disk whether or not anyone is using it. If your app needs 500 MB of RAM, then you need to set aside a server or part of a server (VM/container) with 500 MB of RAM 100% of the time, because that app/API needs to be running and ready to respond immediately when an event/request happens. Heck, you probably need 3x those resources to ensure some redundancy.

Breaking down an app into discrete functions, small bits of code that run in response to events, enables optimal resource utilization. Taking the example from above, if your function needs 500 MB of RAM to run and it runs for 1 second, you only use those resources for that 1 second. When it’s done, those resources are released for the next function to run. This is the big win. This is why it can be so cost effective. It’s essentially time slicing in the cloud, or distributed time slicing.
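As a back-of-the-envelope comparison of that 500 MB example (the request volume below is made up, and real per-GB-second pricing varies by provider):

    # Resources reserved per day for the same 500 MB workload, always-on vs. on demand.
    GB = 0.5                          # 500 MB of RAM
    SECONDS_PER_DAY = 24 * 60 * 60

    # Always-on app: 500 MB reserved all day, used or not (ignoring the 3x for redundancy).
    always_on = GB * SECONDS_PER_DAY  # 43,200 GB-seconds per day

    # Same work as a function: say 10,000 one-second invocations per day.
    on_demand = GB * 10_000 * 1.0     # 5,000 GB-seconds per day

    print(always_on / on_demand)      # ~8.6x fewer GB-seconds tied up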

Now that I’ve explained it in words, let me draw you a diagram to make it even easier to understand. Since I’m not much of an artist, I’ll draw it in ASCII. 
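Roughly like this, with made-up apps and traffic, one app per server:

    Server 1 | App A | idle idle BUSY idle idle idle idle idle
    Server 2 | App B | idle idle idle idle BUSY idle idle idle
    Server 3 | App C | BUSY idle idle idle idle idle BUSY idle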


The main point to note here is that each server (or VM or container) is only usable by one app, whether it’s idle or active.

 

And now resource utilization for functions:
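Same made-up workload, now broken into functions that share the servers:

    Server 1 | fnA fnC fnB fnA fnD fnB fnC fnA
    Server 2 | fnB fnA fnD fnC fnA fnE fnB fnD
    Server 3 | fnC fnD fnA fnB fnE fnA fnC fnB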

 

Notice that we’re running a bunch of apps across the same three servers with essentially no idle time. 

 

 

The Tradeoffs/Drawbacks of Serverless

Serverless is not all roses. With no servers comes great responsibility…

Complexity increases. The smaller the pieces we break things into, the more complex the entire system becomes. The code on a per-function basis gets simpler, but the system as a whole gets more complex. This is the same issue as with microservices. If you break an app into, let’s say, 10 microservices, you now have 10 different apps to manage! Managing 1 monolithic app was a piece of cake compared to managing 10 smaller apps.

Now let’s say you break your monolith down into 100 functions, well… you get the picture. People are creating tools just to help you manage this added complexity, like the Serverless Framework. It was literally created just to help you organize, upload, and set up your functions on Lambda. The fact that you need a tool to do that says a lot.

Which leads to another problem: lack of tooling. There isn’t much out there that can help you manage and monitor your functions. The monitoring tools of today were built for long-running apps, not programs that run for a fraction of a second.

I think these problems will be solved with time, though, as all problems are. If you’re an entrepreneur looking for something to start, this would be a good area to look at.

 

How Can You Go Serverless Today?

There are a few services offering serverless functionality today: Amazon, Google, Microsoft, and Iron.io. Iron.io runs on all three of those clouds, and we’re the only ones enabling on-premises serverless computing, so you can get it behind the firewall (and we make that easy, too). Also, we support Docker, so you can use any programming language and any system tools you want.

We also have some big announcements coming soon to complete our serverless story that will knock your socks off. 

Conclusion

Serverless has the potential to revolutionize the way we write and deploy code. We’re no longer thinking in terms of apps, APIs, or long-running processes; instead, we’re thinking in terms of functions that can respond to requests and process data based on events. We’re thinking of new deployment models and new systems to manage small chunks of code that run for short periods of time.

Five years ago, we planted the seeds for a serverless future, and that future has arrived.  
