
Wednesday, June 6, 2012

Spikability - An Application's Ability to Handle Unknown and/or Inconsistent Load

[Image: Neo handling load]
I've been in many conversations recently about how an application can handle spiky load, and more importantly, how to handle it without a bunch of wasted resources that sit idle most of the time just to cover the spikes. There are plenty of applications where this is a typical use case, for instance:

  • A blogging application may experience a spike with every post that goes out. 
  • A daily deal site like Groupon or One Kings Lane experiences HUGE spikes whenever a new product is posted. One Kings Lane at least has the benefit of knowing exactly when the spike will happen: 8am every morning. 
Let's use the daily deal site for our examples because it's a well-known problem for them. 

Solution #1: Have More Resources than You'll Ever Need

One way to solve this problem is to always have excess capacity waiting for the spikes. If you can estimate the maximum amount of simultaneous traffic you'll see on any given day, you can keep enough servers running to handle that maximum at all times. With this solution, you'll always want some extra capacity above your estimate, just in case, because today might set a new maximum. 
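To get a feel for how much of that capacity ends up idle, here's a rough back-of-the-envelope sketch; all of the traffic and capacity numbers are made up purely for illustration:

```python
import math

# Back-of-the-envelope cost of over-provisioning.
# Every number here is hypothetical -- plug in your own traffic data.

avg_requests_per_sec = 50      # typical load for most of the day
peak_requests_per_sec = 1000   # the daily deal-launch spike
requests_per_server = 100      # what one app server comfortably handles
headroom = 1.25                # extra capacity "just in case"

servers_for_average = math.ceil(avg_requests_per_sec / requests_per_server)
servers_for_peak = math.ceil(peak_requests_per_sec * headroom / requests_per_server)

print(f"Servers needed most of the day: {servers_for_average}")   # 1
print(f"Servers you pay for 24/7:       {servers_for_peak}")       # 13

idle_fraction = 1 - servers_for_average / servers_for_peak
print(f"Capacity idle outside the spike: {idle_fraction:.0%}")     # ~92%
```

With numbers anything like these, the vast majority of what you're paying for sits idle outside the spike.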


Solution #2: Disable Features During High Loads

When traffic spikes and your current resources can't handle it, disable features or fall back to a "lightweight" version of your application. Groupon has had to do this, and Google suggests it as well. It works because it effectively reduces the load each request generates. But you've also stripped away a large portion of your application, so this is not the best option.
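Here's a minimal sketch of what this might look like in application code; the threshold, the load metric, and the feature names are all hypothetical:

```python
# Minimal feature-toggle sketch. The point is the shape of the idea:
# under heavy load, keep core features on and switch the expensive ones off.

HIGH_LOAD_THRESHOLD = 0.8  # fraction of capacity in use

def current_load() -> float:
    """Placeholder: in a real app this would come from your metrics system."""
    return 0.93

def features_for_request() -> dict:
    degraded = current_load() > HIGH_LOAD_THRESHOLD
    return {
        "recommendations": not degraded,       # expensive personalization goes first
        "live_inventory_count": not degraded,  # chatty backend call
        "product_page": True,                  # core features always stay on
        "checkout": True,
    }

if __name__ == "__main__":
    print(features_for_request())
```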

Solution #3: Auto Scaling

When traffic spikes, quickly and automatically launch new servers to handle it. Sounds great!  But it's easier said than done.  First off, it takes a lot of effort to set up an auto scaling system. Second, it has to be able to spin up those extra resources very quickly; too slow, and your system may already have gone down. There's also a bit of waste until your auto scaler decides it's time to tear down the extra instances. And if your spikes occur at random times, you end up scaling up and down all the time. 
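Conceptually, an auto scaler is just a control loop: measure load, compare it against thresholds, and add or remove capacity. Here's a stripped-down sketch; the load metric and the launch/terminate calls are stand-ins for whatever your cloud provider and monitoring system actually expose, and the thresholds are made up:

```python
import random
import time

# Skeleton of an auto scaling control loop. launch_server/terminate_server and
# average_load_per_server are placeholders for real cloud API and metric calls.

MIN_SERVERS, MAX_SERVERS = 2, 20
SCALE_UP_AT, SCALE_DOWN_AT = 0.75, 0.25   # average load per server
CHECK_INTERVAL_SECONDS = 30

servers = ["server-1", "server-2"]

def average_load_per_server() -> float:
    return random.random()                           # stand-in for a real metric

def launch_server() -> None:
    servers.append(f"server-{len(servers) + 1}")     # stand-in for a cloud API call

def terminate_server() -> None:
    servers.pop()                                    # stand-in for a cloud API call

def autoscale_once() -> None:
    load = average_load_per_server()
    if load > SCALE_UP_AT and len(servers) < MAX_SERVERS:
        launch_server()     # real boot time can be minutes -- the spike may be
                            # over (or the site down) before this helps
    elif load < SCALE_DOWN_AT and len(servers) > MIN_SERVERS:
        terminate_server()

if __name__ == "__main__":
    for _ in range(5):        # in production this loop runs forever,
        autoscale_once()      # sleeping CHECK_INTERVAL_SECONDS between checks
        print(f"{len(servers)} servers running")
        time.sleep(1)         # shortened for the demo
```

The gap between "load crossed the threshold" and "new server is actually serving traffic" is exactly the weakness described above.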

Solution #4: Use Message Queues

Queues are made for dealing with spiky load. When load is light, the queues stay empty and everything works as normal. When load is heavy, the queues start to fill up and you process the backlog as fast as your running resources allow. And if the queues keep growing, you can launch more servers to eat away at them. 
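Here's a small self-contained sketch of the pattern using Python's standard library. The in-process queue.Queue is a stand-in for a hosted or distributed queue, and the burst of jobs is a stand-in for the deal-launch spike:

```python
import queue
import threading
import time

# A burst of work lands in a queue; a small, fixed pool of workers drains it
# at its own pace. In production the queue would be a hosted service
# (IronMQ, RabbitMQ, SQS, ...) and the workers would be separate servers.

work_queue: "queue.Queue[int | None]" = queue.Queue()

def worker(worker_id: int) -> None:
    while True:
        job = work_queue.get()
        if job is None:                    # sentinel: shut this worker down
            work_queue.task_done()
            return
        time.sleep(0.1)                    # pretend the job takes real work
        print(f"worker {worker_id} finished job {job}")
        work_queue.task_done()

# Simulate the 8am spike: 50 jobs arrive all at once...
for job_id in range(50):
    work_queue.put(job_id)

# ...but only 4 workers process them, as fast as they can.
workers = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in workers:
    t.start()

work_queue.join()                          # wait for the backlog to drain
for _ in workers:
    work_queue.put(None)                   # tell each worker to stop
for t in workers:
    t.join()
```

Nothing is dropped and nothing falls over; the backlog just takes a little longer to clear during the spike.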


The big benefit here is that you don't waste resources and money (#1) and you don't have to disable any features of your site/application (#2). 

Conclusion

Message queues are a simple and very effective way of dealing with spiky load, and there are a lot of message queue options out there to choose from. IronMQ is a cloud-based message queue that you can use with very little effort and that will scale with you. And if you want on-demand workers to work through your queues, take a look at IronWorker for massively scalable task processing.