Monday, September 30, 2013

IronCast 3: How to rapidly prototype locally with IronWorker - IronWorker 101 Part 3/4

In a series of four IronCasts, we will provide a high-level introduction to using IronWorker, an easy-to-use, scalable task queue. In this series, we will be using an example application written in Rails. However, the same concepts apply to any language or framework.

In this video, we will show you how to prototype and develop your IronWorker locally before sending it to the cloud.

We don't want to keep uploading our code to the cloud to test it; we want to be able to test it locally.

Essentially, there are two things that differ between the local environment and the cloud environment.
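One of those differences is how the worker receives its payload: in the cloud, IronWorker's Ruby runner defines `params` for you, while locally nothing does. Here is a minimal sketch of a shim that lets the same worker file run in both places; the `local_params.json` filename is our own invention for illustration.

```ruby
require "json"

# In the IronWorker environment, the Ruby runner exposes the task payload
# through `params`. Locally, nothing defines it, so a small shim at the
# top of the worker file lets the same code run in both places.
unless defined?(params)
  def params
    @params ||= if File.exist?("local_params.json")
                  JSON.parse(File.read("local_params.json"))
                else
                  {}
                end
  end
end

puts "Running with language: #{params.fetch('lang', 'ruby')}"
```

Drop a `local_params.json` next to the worker with the same payload your application would queue, and the worker behaves identically in both environments.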

Wednesday, September 25, 2013

7 Reasons Webhooks Are Magic

Around the office, I’m known as a bit of a webhook enthusiast. When asked what my favourite features of the platform are, our webhook support tends to top my list. (Did you know you can create a message or a task using webhooks, and use push queues as webhooks?)

I love the flexible, open architecture webhooks enable. They remind me of Unix pipes: pass in some data as a string, and any program that knows how to get the data out of the string automatically gains the ability to use that data. That’s so cool. But sometimes, I forget that not everyone knows how amazing webhooks are. I go to hackathons and show them to people, and can almost see their brains explode. I watch people poll APIs or create convoluted connections, and I cry a little on the inside.

I want to show off the power of webhooks for any doubters out there. I want everyone to see the awesome potential this architecture enables.

For the uninitiated, webhooks are simply a pattern for an evented web. All they are is a promise by an application or API: “when this thing happens, I’ll send this HTTP request with this data.” That simple promise opens up a lot of opportunities, though. It enables the web to start being aware of events, to respond to things without user interaction. It makes the web capable of pushing information to its users, instead of waiting for users to ask for information.
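To make that promise concrete, here is a minimal sketch of the receiving end: a handler that takes the raw JSON body a sender POSTs to your URL and reacts to it. The event names and fields are made up for illustration; every provider documents its own payload shape.

```ruby
require "json"

# A webhook is just an HTTP request your app knows how to interpret.
# This handler parses the raw JSON body of an incoming webhook and
# dispatches on an "event" field. The event names are invented for
# illustration purposes only.
def handle_webhook(raw_body)
  payload = JSON.parse(raw_body)
  case payload["event"]
  when "message.received"
    "got a message from #{payload['from']}"
  when "task.finished"
    "task #{payload['task_id']} is done"
  else
    "ignoring unknown event #{payload['event'].inspect}"
  end
end

puts handle_webhook('{"event":"message.received","from":"+15551234567"}')
# => got a message from +15551234567
```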

I think the easiest way to explain this is with examples:

1. Handling Text Messages

The Twilio API is one of my favourite APIs of all time, because they make it super easy to marry the physical world with the software world. I love that a few lines of code can make the phone in my pocket start buzzing. I love that I can interact with my application the same way I do with most of my friends: text message!

But the best part of all is that Twilio lets you set up webhooks for when you get a text message or phone call. This means that when someone texts or calls your Twilio number, an HTTP POST request will get sent to a URL you define with information about the text or call, letting you process, handle, or reply to it automatically.
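As a sketch of what the receiving side might look like: Twilio POSTs form fields such as "From" and "Body" to your URL, and you answer with TwiML, a small XML vocabulary. Here the reply is built by hand with only the Ruby standard library; in a real Rails app this would live in a controller action.

```ruby
require "cgi"

# Build a TwiML response that texts the sender back. The incoming hash
# mirrors the form fields Twilio POSTs to your webhook URL; the reply
# format is plain XML, escaped with the stdlib's CGI helper.
def twiml_reply(incoming)
  reply = "You said: #{incoming['Body'].to_s.strip}"
  "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" \
    "<Response><Message>#{CGI.escapeHTML(reply)}</Message></Response>"
end

puts twiml_reply("From" => "+15550001111", "Body" => "hello")
```

Return that string with a `text/xml` content type and Twilio sends the reply SMS for you.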

Setting up Twilio webhooks for SMS.
If Twilio didn’t have webhooks (the horror!) you’d need to either ask them every few seconds if there was a new text message or phone call – which, let’s face it, your users would notice – or you’d need to set up a dedicated, always-on TCP connection that Twilio could pass information down. Nothing about that sounds fun.

2. Handling Emails

Sending and receiving emails is hard. SMTP, IMAP, POP3, DKIM, spam… you really need to be an expert to get it right. Fortunately, there are experts for hire. SendGrid, MailChimp, and Mailgun, among others, provide simple APIs for you to send and receive email, and their experts take care of making sure the mail actually reaches the other person’s inbox. Neat.

But most of these providers also provide webhooks for when you receive an email. For example, you can set up SendGrid to send a POST request to a URL you define whenever an email is sent to your address. Better yet, it will include information about the email, making receiving an email exactly the same as receiving an HTTP request. A lot of these services can also send webhooks when an email is opened, or a link in the email is clicked. You can build some great user experiences around this kind of evented web: experiences that don't rely on users browsing the site, but instead go to the user wherever they are.
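Event webhooks of this kind typically arrive as a JSON array of events. A small sketch of tallying opens and clicks from such a payload; the field names here are an assumption based on the common payload shape, so check your provider's docs.

```ruby
require "json"

# Tally email engagement events ("open", "click", ...) from a webhook
# body. Each element of the JSON array is one event; we count them by
# type so the data can be processed like any other HTTP request.
def tally_email_events(raw_body)
  JSON.parse(raw_body).each_with_object(Hash.new(0)) do |event, counts|
    counts[event["event"]] += 1
  end
end

events = '[{"event":"open","email":"a@example.com"},' \
         '{"event":"click","email":"a@example.com"},' \
         '{"event":"open","email":"b@example.com"}]'
p tally_email_events(events)
```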

Setting up SendGrid webhooks.
If they didn’t have webhooks, you’d need to set up your own email server software and manage it yourself. Which, believe me, you absolutely do not want to do. Or you’d have to write your code to use IMAP or POP and SMTP, which is no fun. If you’re writing a web application, you’re already using HTTP. No sense in adding another protocol.

Your only other option would be polling their API to see if there are emails to process yet. Polling is no fun. Don’t poll, if you can avoid it.

Tuesday, September 17, 2013

IronCast 2: What is a worker file? - IronWorker 101 Part 2/4

In a series of four IronCasts, we will provide a high-level introduction to using IronWorker, an easy-to-use, scalable task queue. In this series, we will be using an example application written in Rails. However, the same concepts apply to any language or framework.

In this video, we will show you how to write your worker file to declare and package dependencies for your IronWorker.

Friday, September 13, 2013

How Vextras Integrates Multiple APIs with Iron.io

[This post is part of a series of customer success stories that Chad Arimura is putting together highlighting key customers and how they are using Iron.io to do some pretty big things.]

Vextras develops applications that help e-commerce store owners simplify their business processes. Their solutions connect online stores with other great SaaS apps like MailChimp, Highrise, Xero and Mandrill for a seamless transfer of information from one platform to another. In addition, they are developing an entire suite of tools that eliminate manual processes and allow store owners to run their businesses more efficiently.

What problem did Vextras face before Iron.io?

Ryan Hungate
Lead Developer, Vextras
Vextras operates as a service provider for various e-commerce integrations. We were faced with an unpredictable volume of processing that could happen at any second, on demand, based on our customers' minute-to-minute success. We needed support for queueing up jobs quickly and reliably as they came in, as well as the ability to deal with failed jobs as they happened.

Describe your Architecture

We use Rackspace Cloud as our server infrastructure, load balanced and built to scale. We have incoming API URLs for our customers that notify us that a job needs to be taken care of. At this point, the relationship with Iron.io becomes crucial.

We decided to utilize Iron's Rackspace MQ Cluster and Push Queues. This reduces the load on our servers and allows us to simply listen for messages coming off of the queues rather than constantly polling for jobs.

To reduce bandwidth, we save the messages in our local DB and simply pass on an identifier to IronMQ for reference. When IronMQ pushes the message back to us, we use the identifier to pull the job data from our DB and handle it. It may seem like overkill, but it's a great workflow, super light on bandwidth, reliable and scalable. We're definitely glad we architected it this way.
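A sketch of that identifier-only pattern, with an in-memory hash standing in for the local DB, and `enqueue`/`on_push` as hypothetical names; the real version would post the message with the IronMQ client and receive the push in a web endpoint.

```ruby
require "json"

# Identifier-only queueing: keep the full job in the local DB, queue just
# an id, and rehydrate the job when the push queue calls back. JOBS is an
# in-memory stand-in for the local database.
JOBS = {}

def enqueue(job)
  id = (JOBS.size + 1).to_s
  JOBS[id] = job                    # full payload stays local
  JSON.generate("job_id" => id)     # only the identifier goes to IronMQ
end

def on_push(message_body)           # what the push-queue endpoint runs
  id = JSON.parse(message_body)["job_id"]
  JOBS.fetch(id)                    # rehydrate the job from the local DB
end

message = enqueue("type" => "sync_order", "order_id" => 4211)
puts message                        # => {"job_id":"1"}
puts on_push(message)["type"]       # => sync_order
```

The message crossing the wire stays tiny regardless of how large the job payload grows, which is where the bandwidth savings come from.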

Results of Using Iron.io on Rackspace

By adding IronMQ, we were able to look at our application on a higher tier and separate the critical processes out of the main loop. Balancing workloads and workflows for an application is a really daunting task when you're not just sending an email on a trigger. We can now monitor the jobs easily and even retry on failure without having to build that into our codebase.

This retry capability is probably the most important feature for us: when a job fails the first time, IronMQ will automatically retry based on the retry count and retry delay settings. This really helped us because our jobs work with external APIs, and we all know how that goes: you can't count on every request working as planned. Iron.io is a perfect fit for us because of its natural framework support, feature richness, and simplicity!

Simplify your life with IronMQ today: sign up for a free account.

To see how Vextras can put your store on Autopilot, check out

Wednesday, September 11, 2013

Running Go on IronWorker (reposted)

Going Go: Running Go Programs in IronWorker
We came across a great post by William Kennedy on running Go tasks in IronWorker. It's a detailed article that left a few of our developer evangelists a bit envious of the care and detail it takes in walking developers through the process. (Using IronWorker is not difficult at all, but the post still explains things at a simple, smooth pace.)

William writes Going Go Programming, which is a great resource for all things Go-related.

We're fans of the Go language (and Going Go). Not surprising, given we've been using Go in production for much of our backend for over two years (you can read about that here and here). We also run the GoSF meetup group which lets us connect with top Go developers in the city and around the world.

We can talk at length about IronWorker and its ability to run languages such as Ruby, Python, Java, Node.js, or .NET as well as binaries and compiled languages like Go. But we do that a lot, so we'd like to pass the baton to William and share his thoughts.

G+: William Kennedy

Running Go Programs In IronWorker

[See the full blog post here.] Iron.io has a product called IronWorker, which provides a task-oriented Linux container that you can run your programs inside. If you are not sure what I mean, think of this as having a temporary Linux virtual machine instantly available for your personal, short-term use. IronWorker allows you to load your binaries, code files, support files, shell scripts, and just about anything else you may need to run your program in the container. You specify a single task to execute, such as running a shell script or a binary, and IronWorker will perform that task when requested. Once the task is complete, IronWorker will tear down the container as if it never existed.


[some information about configuring local environment]


The Test Program

I have built a test application that we are going to run in IronWorker. To download the code and the IronWorker support files, run the following commands:

  cd $HOME
  export GOPATH=$HOME/example
  go get

This will copy, build and install the code into the example folder under $HOME. The program has been written to test a few things about the IronWorker Linux container environment. Let's review the code for the program first and test it locally.

Monday, September 9, 2013

IronCast 1: Introduction to IronWorker - IronWorker 101 Part 1/4

In a series of four IronCasts, we will provide a high-level introduction to using IronWorker. IronWorker is an easy-to-use scalable task queue that gives cloud developers a simple way to offload front-end tasks, run scheduled jobs, and process tasks in the background and at scale.

These videocasts will cover core concepts including:
  • Deploying a worker
  • Writing worker files to declare dependencies
  • Testing and prototyping workers rapidly on your local machine
  • Connecting to a cloud development database

We will be using an example application written in Rails. However, the same concepts apply to any language or framework. IronWorker can handle almost any language, including binary executables, so if you program in PHP, Python, Node.js, or another language, don't worry: we have client libraries and examples to show you the way. It should also be possible to convert this example to the language of your choice without much effort. Please refer to further documentation here.

In this video, we will show you how to upload and run a worker in the IronWorker environment. We will deploy a worker that makes external API calls in the background in four simple steps:

Step 1: Worker controller code
This code appears within your application logic and queues up the worker task to run in IronWorker.
class SnippetsController < ApplicationController

  def create
    # Note: the client class and the "pygments" worker name below are
    # reconstructed from the upload step (iron_worker upload pygments)
    # using the iron_worker_ng gem.
    @snippet = Snippet.new(params[:snippet])
    if @snippet.save
      @client ||= IronWorkerNG::Client.new(:token => ENV["TOKEN"], :project_id => ENV["PROJECT_ID"])
      @client.tasks.create("pygments",
                           "database" => Rails.configuration.database_configuration[Rails.env], # This sends in database credentials
                           "request" => {"lang" => @snippet.language,
                                         "code" => @snippet.plain_code},
                           "snippet_id" => @snippet.id)
      redirect_to @snippet
    else
      render :new
    end
  end
end
Step 2: Worker code
This worker will get uploaded to IronWorker and will run in the background asynchronously when invoked. This worker makes an external API request and then saves the results into the database.

require 'net/http'
require 'uri'

def setup_database
  puts "Database connection details: #{params['database'].inspect}"
  return unless params['database']
  # establish a database connection with the credentials passed in by the controller
  ActiveRecord::Base.establish_connection(params['database'])
end

setup_database

uri = URI.parse("") # the URL of the highlighting service goes here
response = Net::HTTP.post_form(uri, lang: params["request"]["lang"], code: params["request"]["code"])

snippet = Snippet.where(:id => params["snippet_id"]).first
snippet.update_attribute(:highlighted_code, response.body)
Step 3: Worker file (.worker)
This file declares your IronWorker’s dependencies so that we can package up the dependencies and make them available to your worker.
runtime "ruby"

# include postgresql and activerecord
gem "pg"
gem "activerecord"

exec "pygments_worker.rb"

# Merging models
dir '../app/models/'

full_remote_build true # Or remote
Step 4: Uploading to IronWorker
After you install the IronWorker CLI, run iron_worker upload [WORKER NAME]. The CLI looks for an iron.json file where your credentials should be stored, so if you keep your iron.json in your workers folder, you should first cd into that folder. [WORKER NAME] is the file name of the .worker file.
cd workers
iron_worker upload pygments 
And that’s it! Four simple steps and you have deployed your first IronWorker. Once the worker has been uploaded, you can queue up tasks from the application or from the CLI. In the following three episodes of IronCast, we will dive into the details of how to construct your own worker file, how to prototype with IronWorker locally, and how to connect your workers to your cloud-hosted database.

But for now, you should have enough to get up and running on IronWorker. Sign up for a free account and run this example. Or you can check out other examples on Github as well as dive into more details on the service within our Dev Center.

Wednesday, September 4, 2013

How Untappd Reduced App Response Times with Iron.io

[This post is part of a series of customer success stories that Chad Arimura is putting together highlighting key customers and how they are using Iron.io to do some pretty big things.]

Untappd is a mobile location based service for beer lovers that allows users to log, rate and discover new beers, venues and people. By tracking your history and beer ratings, Untappd can recommend beers that are similar based on your taste profile. You can also see what your friends are drinking all over the world as well as find new beers and locations near you!

I recently spoke with Greg Avola, CTO and co-founder of Untappd. Greg is a coder at heart and has a special love for building fast, scalable web applications. This is his story of how Untappd and Iron.io help beer drinkers celebrate their passions around the world.

What problem did Untappd face before Iron?

Untappd is a check-in service, so we have a lot of background processing that needs to be performed on every single check-in. We were doing all of this while a user was checking in, which ended up increasing the time it took for a user to check in. This became a huge problem as we started to grow. We needed a scalable way to deploy our tasks to run asynchronously, without the user waiting for these jobs to be completed. These tasks became even more important to us as our content feed became more dependent upon them. Our activity feed was suffering because users couldn't see their check-ins as quickly, due to the high response times.

Where did previous solutions fall short?

The obvious choice was to use a message queue system that hooked into our own worker system; however, with our small development team, managing and deploying these jobs became a big burden. It also became clear that we needed to create a scalable architecture, but unclear how our small team could manage this. We wanted to focus on building the best product, not on building a message queue system written in our application's language.

Enter Iron.io on Rackspace

We currently use Rackspace to host our application, and IronMQ / IronWorker exclusively for all our message queue needs. Since IronWorker provides the flexibility of being able to create "jobs" in our own language (PHP), it made writing and deploying workers very easy. In addition, IronWorker is built on a highly scalable infrastructure, which means we don't have to worry about the service being able to handle our massive number of tasks. We currently have around 15 different workers that run through the site and perform calculations and database writes when a user checks in.


Using Iron.io has helped us focus on building a great social platform without having to worry about managing our queues and workers at scale. We've decreased our app's response time by offloading many heavy operations into asynchronous IronWorker tasks instead of waiting for them to complete. We've also been able to improve our queries, thanks to the ability to offload insertion to the workers, making the check-in process as fast as possible for our users. If users can check in faster, that makes them happy, which makes us happy.

We couldn't have said it better ourselves!  Cheers!

To eliminate the burden of setting up and managing your own queuing and worker system, sign up for a free account today.

And to start getting beer recommendations now, check out