We came across another great post by William Kennedy on scaling out applications using the Iron.io platform and the Go programming language. He wrote an excellent post on using Go with IronWorker several months back.
William writes Going Go Programming, so he certainly knows what he's talking about when it comes to Go. We're big fans of William and his blog – maybe not so surprising since we're both fans of the Go programming language (you can hear about our experiences with Go here, here, and here).
Queue Your Way to Scalability
The first thing I did when I started programming in Go was begin porting my Windows utility classes and service frameworks over to Linux. This is what I did when I moved from C++ to C#. Thank goodness, I soon learned about Iron.io and the services they offer. Then it hit me: if I wanted true scalability, I needed to start building worker tasks that could be queued to run anywhere at any time. It was not about how many machines I needed; it was about how much compute time I needed.
Outcast Marine Forecast
The freedom that comes with architecting a solution around web services and worker tasks is refreshing. If I need 1,000 instances of a task to run, I can just queue them up. I don't need to worry about capacity, resources, or any other IT-related issues. If my service becomes an instant hit overnight, the architecture is ready and the capacity is available.
My mobile weather application Outcast is a prime example. I currently have a single scheduled task that runs in Iron.io every 10 minutes. This task updates the marine forecast areas for the United States, downloading and parsing 472 web pages from the NOAA website. We are about to add Canada, and eventually we want to move into Europe and Australia. At that point, a single scheduled task is not a scalable or redundant architecture for this process.
Thanks to the Go client from Iron.io, I can build a task that wakes up on a schedule and queues up as many marine forecast area worker tasks as needed. I can use this architecture to process each marine forecast area independently, each in its own worker task, providing incredible scalability and redundancy. The best part: I don't have to think about hardware or IT-related capacity issues.
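The fan-out pattern described above can be sketched in Go. This is a minimal illustration, not Outcast's actual code: the worker code name `forecast_worker` and the sample NOAA zone IDs are assumptions, and the scheduled task would POST the resulting JSON batch to the IronWorker task-queue endpoint (or use Iron.io's Go client) rather than printing it.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// task mirrors the general JSON shape of an IronWorker task:
// a code name identifying which worker to run, plus a string payload.
type task struct {
	CodeName string `json:"code_name"`
	Payload  string `json:"payload"`
}

// batch wraps the tasks to be queued in one request.
type batch struct {
	Tasks []task `json:"tasks"`
}

// buildTaskBatch fans out one task per marine forecast area, so each
// area is downloaded and parsed independently by its own worker.
func buildTaskBatch(areas []string) ([]byte, error) {
	var b batch
	for _, area := range areas {
		// Each worker receives just the area it is responsible for.
		payload, err := json.Marshal(map[string]string{"area": area})
		if err != nil {
			return nil, err
		}
		b.Tasks = append(b.Tasks, task{
			CodeName: "forecast_worker", // hypothetical worker code name
			Payload:  string(payload),
		})
	}
	return json.Marshal(&b)
}

func main() {
	// The real scheduled task would load the full list of forecast
	// areas; three sample NOAA zone IDs stand in here.
	body, err := buildTaskBatch([]string{"ANZ335", "ANZ338", "AMZ610"})
	if err != nil {
		panic(err)
	}
	// In production this batch would be sent to the IronWorker API
	// instead of printed.
	fmt.Println(string(body))
}
```

Because each area becomes its own queued task, adding Canada or Europe is just a longer `areas` slice; the platform, not the scheduler, absorbs the extra load.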