
Friday, September 19, 2014

Orchestrating PHP Dependencies with Composer and IronWorker

Package your dependencies on IronWorker using Composer
This is a tutorial describing how to include and use the PHP package management tool Composer with IronWorker.

Composer is a tool for dependency management in PHP. It allows you to declare the dependent libraries your project needs, and it will install them in your project for you. Packagist is the main Composer repository. It aggregates all sorts of PHP packages that are installable with Composer.

Installing and Using Composer Locally

1. download the Composer installer (this downloads the composer.phar file locally)
$ curl -sS https://getcomposer.org/installer | php
2. define packages and versions in a composer.json file
{
    "require": {
        "vendor/package": "1.3.2",
        "vendor/package2": "1.*",
        "vendor/package3": ">=2.0.3"
    }
}
3. run the installation command
$ php composer.phar install
4. your packages will be installed in a /vendor folder locally and can be loaded in your PHP script via
require 'vendor/autoload.php';

Using Composer on IronWorker

It's nearly as simple to do the same in IronWorker.

1. include both the local composer.phar and composer.json in your .worker manifest. Also include a build command for us to run on our servers.
runtime "php"

file "composer.phar"
file "composer.json"
build "php composer.phar install"

exec "my_script.php"

2. upload via our command line tool
$ iron_worker upload <name_of_workerfile>
For example: iron_worker upload test.worker

Boom! It's that simple! Your packages will be built on our servers and will be accessible to your script via require 'vendor/autoload.php';.

Composer growth!

With 38,464 packages registered and close to 35,000,000 package installs each month, Composer is quickly becoming the standard in PHP package management, adopted by popular frameworks such as Laravel, Symfony, Yii, Zend, and more. We also heard recently that Engine Yard, a leader in application management, is sponsoring Composer with a $15,000 community grant!

We are excited to see the growth that Composer has seen thus far, and we look forward to seeing our own users take advantage of this wonderful tool and the IronWorker platform together.


Props go out to Dieter Van der Stock, an awesome fan of IronWorker who is using IronWorker to notify users when their Composer dependencies are out of date. Send him your regards at @dietervds.

More flexibility for our developers
Making deployment and dependency management painless is a top priority for our team. Supporting a diverse range of languages, frameworks, and packages provides our users what they need to make their implementation successful.

We'd love to hear your feedback, and we'd even feature tutorials written by you. Send us a message!

Wednesday, September 10, 2014

How cine.io Uses Node.js and IronWorker to Handle Their Background Processing

The following is a guest blog post by Thomas Shafer describing how cine.io deploys their workers within IronWorker to handle all of their background processing. cine.io is the only true developer-focused live streaming service. We offer APIs and SDKs for all mobile platforms and language frameworks and let developers build and ship live-streaming capabilities with just a few simple steps.

We have many background tasks to run and use IronWorker for all of our long-running and resource-intensive processing. These types of tasks include video stream archival processing, customer emails and log processing, and bandwidth calculations.

Over the course of working with the IronWorker service, we developed a few approaches to make it easy for us to integrate distributed processing into our Node.js application framework and maintain consistency with our main service across the full set of workers. The primary strategy is to use a single shared worker space.

Single Shared Worker Space

As an example of the approach we use, when we process logs from our edge servers, we need to gather log file data and attach each bandwidth entry to a user's account. To do this, we need credentials to access multiple remote data sources and follow some of the same logic that our main dashboard application uses.

To maintain logical consistency and a DRY architecture, we upload many shared components of our dashboard application to IronWorker. Our dashboard application shares code with the central API application to ensure logical consistency – as Wikipedia puts it "a single, unambiguous, authoritative representation within a system".

Because some of our background tasks require shared code between our dashboard application and our API, we decided to structure our IronWorker integration with a single .worker class, titled MainWorker.

MainWorker Serves as the Primary Worker

We use one worker to perform a number of tasks and so it needs to be flexible and robust. It needs to be able to introspect on the possible jobs it can run and safely reject the tasks it cannot handle. One way to make it flexible is to unify the payload and reject any attempts to schedule a MainWorker that does not follow the expected payload format.

A good way to enforce a predictable format is to, once again, share code. Whether it's the dashboard, API, or another MainWorker, they all use the same code to schedule a job.

Our MainWorker payload follows the following format:

      {
        configuration: {
          // configuration holds sensitive variables
          // such as redis credentials, cdn access codes, etc.
        },
        jobName: "",
          // The name of the job we want to run.
          // MainWorker understands which jobs are acceptable
          // and can reject jobs and notify us immediately on inadequate jobNames.
        source: "",
          // source is the originator of the request.
          // This helps prevent unwanted scheduling situations.
          // An example is preventing our API application
          // from scheduling the job that sends out invoices at the end of the month.
          // That job is reserved for IronWorker's internal scheduler.
        jobPayload: {
          // the payload to be handled by the job, such as model ids and other values.
        }
      }
The jobs folder we upload contains the code for every specific job and is properly included by the MainWorker, which is written in Node. Here's a look at the .worker file for MainWorker.

Example of cine.io's MainWorker.worker file

runtime 'node'
stack 'node-0.10'
exec 'main_worker.js'
dir '../models'
dir '../config'
dir '../node_modules'
dir '../lib'
dir '../jobs'
dir '../main_worker'
name 'MainWorker'

Benefits of Our Approach

After working with this setup for a while, I'm convinced that a single shared space is the way to go.

Continuous Deployment

By throwing our IronWorker jobs into the same codebase as our API and dashboard application, I know our logic will be consistent across multiple platforms. This allows us to integrate IronWorker with our continuous integration server. We can update every platform simultaneously with the most up-to-date code. With this approach, there is no way that one-off untested scripts can make their way into the environment. We update code on IronWorker through our CI suite, and it's up to the developer, code reviewers, and our continuous integration server to validate our code. Everyone has visibility into what is on the platform.

Consolidated Reporting

By running all of our jobs through the MainWorker, we know each new worker will gather metrics and handle error reporting out of the box. We don't need to figure out how each new worker will handle errors, what the payload will look like, etc. Enforcing a single convention leads to us focusing on the internal logic of the jobs and getting things shipped.

Flexible Scheduling

The job payload has a rigid structure but we can share the library for scheduling jobs. That library will be responsible for sending the appropriate structure with the necessary configuration variables, jobName, source, and jobPayload.

One Drawback to the Approach

There is a drawback with using a single shared space for our workers. When we look at jobs, whether running or queued, all we see is "MainWorker, MainWorker, MainWorker". We cannot use the dashboard to tell which jobs are taking a long time and therefore lose some of the usefulness of the dashboard. (Note: If IronWorker were to allow tags or additional names, that would go a long way toward giving us visibility. I hear it's on the roadmap, so let's hope it makes it in sometime soon.)


Deploying a shared environment to IronWorker has enabled our development team to focus on delivering customer value in a rapid and high-quality manner. We can easily test our job code, ensure IronWorker has the most up-to-date code, and promptly fix any production errors.

About the Author
Thomas Shafer is a co-founder of cine.io, the only developer-focused live streaming service available today. He is also a founder of Giving Stage, a virtual venue that raises money for social and environmental change. (@cine_io)

To see other approaches to designing worker architectures, take a look at how Untappd uses a full set of task-specific workers to scale out their background processing in this post. Also, be sure to check out this article on top uses of IronWorker.

Tuesday, September 9, 2014

Message Queues for Buffering : An IronMQ and Python Case Study

Using IronMQ and Python as a Buffer between Systems
Connecting systems and moderating data flows between them is not an easy task. Message queues are designed for just this purpose – buffering data between systems so that each can operate independently and asynchronously.

Here's a short article on using IronMQ to buffer a CMS from the real estate data coming out of a listing service. The app developer uses Python for the bridge code between the listing service and the CMS.

Here's an excerpt from the post:
Building a System with IronMQ and Python 
One of my most recent projects was writing a system to deliver real estate listing data to a content management system. Since the listing data source was bursty and I wasn’t sure how the CMS would handle the load, I decided to use a message queue, where the messages would have a JSON payload. Message queues are great at decoupling components of a system. 
For the queue, I used IronMQ. The company already was using it, it has a free tier (up to 24 messages a second), the service has been stable and reliable, it has great language SDKs, and setting up a durable message queue is something I’d rather outsource...
I wrote the bridge code from the listing database to the message queue in python. The shop was mostly Java and some Python, and Python seemed a better fit for a small ‘pull from here, push to there’ application... 
[F]or this kind of problem, Python was a great solution. And I’ll reach for IronMQ any time I need a message queue. This pair of technologies was quick to implement, easy to deploy, and high performance wasn’t really a requirement, since the frequency of the listing delivery was the real bottleneck.
          Read the full post >> 

About the Author

Dan Moore is a developer of web applications. Since 1999, Dan has created a variety of web sites and web applications, from ecommerce to portals to user self-service. He is competent in PHP, Java, Perl, SQL, GWT, and object-oriented design.

For other ways that you can use message queues, check out the following post from our archives.

Top 10 Uses For A Message Queue
We’ve been working with, building, and evangelising message queues for the last year, and it’s no secret that we think they’re awesome. We believe message queues are a vital component to any architecture or application, and here are ten reasons why:

See more >>

Monday, September 8, 2014

A Better Mobile Compute Cloud: NodeJS + Iron.io (repost from ShoppinPal)

There are number of tools for creating mobile apps but the one area that can be challenging is handling the background processing that takes place within mobile applications. 

A popular mobile app, ShoppingPal, is using Iron.io to handle its background processing needs with great results. They wrote a recent post on their success in moving from Parse to Iron.io.

Here's an excerpt:
We faced a challenge where incoming inventory updates for products weren’t processed in real time via parse triggers anymore... It was clear we weren’t going to grow if we stuck with Parse (background jobs) for our next set of retailers. 
ShoppingPal + Iron.io: Better Background Processing
That’s when we ran into Iron and what a lucky coincidence that was! 
They had queuing, they had workers, they had a default queue attached to their workers, they had public webhooks that would allow posting directly into a worker’s own queue. 
We haven’t looked back since and if you’re finding worker based queuing and execution becomes a beast for your project then slay it with Iron.  

About ShoppingPal

ShoppingPal provides mobile commerce capabilities that let local retailers and online web sites offer state-of-the-art mobile storefronts.

About the Original Author

Pulkit Singhal is a co-founder and CTO of ShoppingPal. He wears many hats ranging from UI design and development to optimizing back-end architecture. He is an avid blogger on technical subjects and is active in a number of open source communities and forums. (#Pulkit)

For another story on using Iron.io as a mobile compute cloud, check out our post on the widely popular Untappd mobile app.

How One Developer Serves Millions of Beers: Untappd + Iron.io
Untappd provides a mobile check-in application for beer lovers. Their application has been downloaded by over a million users and on any given night, they can register over 300,000 individual checkins and social sharing events...


Thursday, September 4, 2014

Iron.io hosting CoreOS meetup – Speakers include Brandon Philips from CoreOS and Sam Ward from Iron.io

Iron.io will be hosting a CoreOS meetup this Monday, Sept. 8th. Brandon Philips, CTO of CoreOS, will be a speaker, as will representatives from DigitalOcean and Citrix.

Sam Ward, Senior Ops Engineer at Iron.io, will also be giving a talk. We're at the early stages of using CoreOS, but we're liking what we're seeing. Here is a description of what he'll cover:

8:00 - 8:30 pm  Iron.io + CoreOS
Sam Ward will give an overview of Iron.io's evolving operations environment and how CoreOS fits these requirements. He'll comment on the technical merits of CoreOS and discuss Iron.io's use of Docker to ship and manage their cloud service apps and worker environments. He will also discuss consistency, high availability, and fault tolerance as factors of application design, and how CoreOS makes it easy to bake these properties into your application.

CoreOS Meetup

Date:  Monday, September 8, 2014
Time: 6-9pm
Location: Heavybit Industries, 325 9th Street, San Francisco, CA

More Details: CoreOS + DigitalOcean Meetup

Monday, August 25, 2014

Segment.io adds IronMQ as an endpoint for message queueing

Segment.io + IronMQ = Integration Goodness
IronMQ has just been added as an endpoint for message queueing within Segment.io. Segment.io provides a single gateway that lets developers, marketers, and analysts pipe logs, metrics, user actions, and other key app data to hundreds of other applications and tools.

The Segment.io API acts as a control layer for other analytics tools, allowing developers to integrate one single API in place of many.

Using Segment.io's API lets an engineer implement tracking once within an application; Segment.io then automatically translates and implements every third-party tag via their gateway. The net effect is that a developer or marketer can just push an "on" button to enable a new tool.

Segment.io + Message Queueing

There are a number of key advantages to using message queuing in combination with Segment.io. For example, in some situations, the data streams or webhook requests coming out of the Segment API can overload systems on the receiving end.

In these cases, IronMQ is a great solution because it can act as a buffer. It also keeps events in FIFO order, has a storage duration of up to 30 days, and provides developers with the ability to build out custom workflows.

We'll be going deeper into the subject in subsequent posts and dev center articles, but here are just a few of the use cases where using a message queue as a Segment.io endpoint can make a lot of sense.

•  Buffer and persist data for processing at a later time.
•  Extract, translate, and load data into tools such as AWS's Redshift, Salesforce, and other custom tools.
•  Process data on-the-fly asynchronously using IronWorker.

To learn more about how message queues can help increase system reliability and handle data spikes, check out this article on top uses of a message queue.

To integrate IronMQ into Segment.io, head to their Integration Center for simple instructions that will get you up and running in minutes.

Tuesday, August 19, 2014

How One Developer Serves Millions of Beers: Untappd + Iron.io

Untappd – The Mobile App for Beer Lovers
Untappd provides a mobile check-in application for beer lovers. Their application has been downloaded by over a million users and on any given night, they can register over 300,000 individual checkins and social sharing events.

The Untappd application lets users record their beer selections, share their likes with friends, win rewards, get recommendations, and participate in a shared passion of beer with others around the world. A solid design, a fun set of features, and a responsive application are just a few of the reasons they're one of the fastest-growing entertainment-based social communities.

What’s even more impressive about Untappd is that it’s just a two-person company – a designer and a developer, both of whom have other jobs and are doing Untappd on the side. This is a story on how they got started and the work that goes on behind the scenes to power their success.

The Untappd Story

Greg Avola and Tim Mather met over Twitter six years ago when Greg was looking for a collaborator for a Twitter mobile app. They ended up working together on the app and then proceeded to take on several other projects as a designer/developer combination. In early summer of 2010, Tim came up with the idea for a check-in system for beer drinkers. The idea mapped well with Greg’s interest in beer and so they quickly created a mobile app and got to market by the fall.

The Untappd app has evolved a lot since the early days but the main premise is the same – users check-in at locations and check-in on the beers they’re drinking. For each check-in, they become eligible to win badges and receive promotions. They can also get real-time recommendations for beers based on their location.

The team works closely with breweries and beer venues to increase the connections that users have with their favorite beers. They help breweries and other partners create badges and other promotional elements for beers and events. The badges are hugely popular and are posted and shared widely within the app and across social media.

Checking In a Beer
Given how important up-to-date information about beers is, they've created what will soon become one of the largest open-source beer databases in the world. It's moderated by over 40 volunteers who help clean up information and de-dup entries. They offer free API access for developers, with the ultimate goal of making it the most widely used source of beer data.

Registered users top over a million, and they service over 300,000 check-ins on weekend nights and have processed over 50M events (the majority of them using Iron.io). Users love the Untappd app and use it to keep track of their beer, discover new favorites, meet new people, and find new places of interest.

The Untappd app is a model that works – a fanatical user base, an app that provides rewards, hundreds of happy partners, and an almost limitless opportunity.

Behind the Scenes of the App

The app framework for Untappd is that of a mobile client, a set of app servers connected to databases, and a large async/background processing component. They make use of a LAMP stack with PHP serving as their primary language. They use MySQL as their primary database for transactions, MongoDB for their recommendation engine and activity feeds, Redis to store all the counts for beer/user/brewery/venue, and Iron.io for their background processing and as their mobile compute engine.

Recent Beers
When users check in to Untappd, there are a number of transactional events that take place. The user account gets updated and the check-in gets posted to Twitter, Facebook, and/or FourSquare. If a photo is uploaded, it gets processed. Check-in parameters get filtered for location and venue and then piped into the MongoDB clusters that power their local recommendation capability. All in all, there can be up to 10 different events taking place for each location or beer check-in.


Initially, the check-in processes were handled as a large batch job run after hours at night. Because actions were being posted well after the actual event, the check-in process obviously wasn't as responsive as their users needed. The Untappd team then moved these actions into the check-in response loop. That lasted for a little while, as it resulted in a more responsive check-in, but it quickly showed signs of strain. On heavy nights, Untappd's main app servers would start to melt because they were being used to process all the actions for each check-in, in addition to serving pages and providing query responses.

This tightly coupled serial approach also resulted in users having to wait for each process to start and finish in sequence. The delayed response times began having noticeable impacts on engagement. It was taking much longer to check in, as the app wouldn't return for many seconds at a time. Users were getting frustrated, and so they were not checking in for the second beer or the third.

Serial Processing Events at Check In = Slow Response Times

The general experience was also not feeling real-time enough for users because they wouldn’t see tweets until much later, and the information they were receiving from the app after a check-in was not as relevant as they might expect. Recommendations for other beers, for example, were out of date because the database wasn’t getting new beer inserts in a timely manner, and notifications of nearby trending places were not being sent out quickly enough to be relevant.

To keep user engagement high and their user base growing, they needed to find a solution to their check-in problem. They turned to Iron.io to do so.


To make their application more responsive and scalable, Untappd moved their event processing to Iron.io, using a combination of IronMQ and IronWorker. Each check-in event is sent to a queue within IronMQ and then routed to workers within IronWorker. The processing runs outside the user response loop, which speeds up check-ins and provides the ability to handle any and all spikes in traffic.

Trending Events
Using Iron.io, Untappd has been able to reduce the average check-in time from over 7 seconds to 500ms. They've also eliminated the need to manage infrastructure for this part of their app and given themselves an almost unlimited ability to scale their processing.

Continual Event Processing

The way the event flow works is that they put a check-in event onto a single queue and then that fans out to multiple queues – with each sub-queue controlling a different action, such as posting to social media or updating the recommendation engine. Multiple workers spin up soon after a check-in happens and so by the time the user has laid down their phone and sampled their beer, every action is either in process or has completed.
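That fan-out step can be sketched with stand-in queues from Python's standard library. This is an illustration of the pattern, not Untappd's actual code, and the action names are hypothetical:

```python
from collections import defaultdict, deque

# One inbound check-in queue fans out to per-action sub-queues, so
# social posts and recommendation updates can be worked independently.
ACTIONS = ["post_twitter", "post_facebook", "update_recommendations"]

checkin_queue = deque()
sub_queues = defaultdict(deque)

def fan_out():
    # Copy each check-in event onto every action's sub-queue.
    while checkin_queue:
        event = checkin_queue.popleft()
        for action in ACTIONS:
            sub_queues[action].append(event)

checkin_queue.append({"user": "greg", "beer": "ipa"})
fan_out()
print(len(sub_queues["post_twitter"]))  # 1
```

In production the sub-queues live in IronMQ and separate workers drain each one, which is what lets all the actions run concurrently instead of in sequence.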

Wednesday, August 6, 2014

Go, IronWorker, and SendGrid at Gengo (a repost)

Shawn Smith from Gengo recently wrote a post on their use of Go, the programming language that we also use at Iron.io for our backend services. (Gengo is a popular people-powered translation platform based in Tokyo and San Mateo.)

The post discusses several of the apps where they're using Go including a deployment app and several conversions of PHP and Python apps. The one that caught our attention is their use of Go with IronMQ and SendGrid to send out large volumes of emails to their user base.

Here's an excerpt from the post:
We created a service to send emails that uses Iron.io's queue service, IronMQ. We call this the Email Consumer, which pulls JSON payloads off of a queue before rendering and sending the email that matches the ID in the payload.
[When] a new customer signs up on our website, our web application puts a JSON payload with an ID onto the queue. The Email Consumer consumes the payload and looks up the email subject for the email with the given ID, also rendering the template with the given data. In this case, the data is simply the user’s name. It also localizes the strings in the template based on the language code provided (in this case, it’s Japanese) and sends the email via SendGrid, welcoming the customer to Gengo.
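Gengo's consumer is written in Go, but the flow the excerpt describes (pull a JSON payload, look up the template by ID, localize, send) can be sketched as follows. The template IDs, strings, and the stand-in "send" list are all hypothetical:

```python
import json

# Hypothetical localized templates keyed by (email ID, language code).
TEMPLATES = {
    ("welcome", "en"): "Welcome to Gengo, {name}!",
    ("welcome", "ja"): "Gengo e yokoso, {name}-san!",
}

sent = []  # stands in for handing the rendered message to SendGrid

def consume(raw_payload):
    # Pull a JSON payload off the queue, render the matching template
    # in the requested language, and "send" the localized email.
    payload = json.loads(raw_payload)
    template = TEMPLATES[(payload["id"], payload["lang"])]
    sent.append(template.format(name=payload["data"]["name"]))

consume(json.dumps({"id": "welcome", "lang": "ja",
                    "data": {"name": "Hana"}}))
print(sent[0])  # Gengo e yokoso, Hana-san!
```

Keeping every template behind one consumer is what makes the consistent style and copy edits mentioned below so easy.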

The lines that we like come in the summary of the app:
"Moving all of our emails into one place helps us easily make consistent style and copy edits. We send over 50 different emails to customers and translators through the Email Consumer, and to date it has sent over 500,000 emails without a problem."
– Shawn Smith, Go Developer at Gengo

Doing work in the background using IronMQ and SendGrid is a great way to distribute work and increase the scalability of an application. That they're using Go to do so makes it all that much better. Thanks for the inclusion, Gengo.

About Gengo 

Gengo helps businesses and individuals go global by providing fast, high quality translation powered by the crowd. Upload copy to their website or via API and then their platform springs into action, allocating work orders among thousands of qualified translators in real-time. The platform currently draws from a network of 10,000+ pre-tested translators working across 34 languages.

About the Original Author

Shawn Smith is a software developer from Boston, Massachusetts. After graduating from Northeastern University, he moved to San Francisco to work for Rackspace and now works at Gengo in Tokyo. His favorite programming language is Go.

Monday, July 28, 2014

Iron.io Increases Message Size in IronMQ

Message Size Increases from 64KB to 256KB
Large message sizes are now available within IronMQ. We have increased the message size from 64KB to 256KB.

This increase was originally in response to some use cases around XML, but it also allows the service to handle almost every messaging need. The increased sizes are available on Developer plans and above.

To try IronMQ for free, sign up for an account at Iron.io. You can also contact our sales team if you have any questions about high-volume use or on-premises installations.

Note that it's a good design practice to pass IDs in message payloads when dealing with large blocks of data such as images and files. You can put the data in IronCache, which has a maximum value size of 1MB, or in something like AWS S3, and then pass the key as part of the message.
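That store-the-blob, queue-the-key pattern can be sketched in a few lines. The in-memory dict below stands in for IronCache or S3, and the helpers are illustrative:

```python
import uuid

# Stand-in object store (IronCache, S3, etc.): large blobs live here,
# and only their small keys travel through the message queue.
blob_store = {}
message_queue = []

def enqueue_large(data):
    key = str(uuid.uuid4())
    blob_store[key] = data              # store the big payload once
    message_queue.append({"key": key})  # queue only the reference

def dequeue_large():
    msg = message_queue.pop(0)
    return blob_store[msg["key"]]       # fetch the blob by its key

# A ~1MB image, well over the 256KB message limit, moves through
# the queue as a message of just a few dozen bytes.
enqueue_large(b"x" * 1_000_000)
print(len(dequeue_large()))  # 1000000
```

The queue stays fast because messages stay tiny, and the blob store handles the sizes it was built for.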

Monday, July 21, 2014

Iron.io Releases Dedicated Worker Clusters

IronWorker Now Offers Dedicated Worker Clusters
Dedicated worker clusters are now available within IronWorker. Sets of workers can be provisioned to run tasks on a dedicated basis for specific customers. The benefits include guaranteed concurrency and tighter latencies on task execution.

This capability is designed for applications that have a steady stream of mission-critical work or organizations that have greater requirements around task execution and job latency.

The IronWorker service is a multi-tenant worker system that uses task priorities to modulate execution across millions of tasks a day. Each priority has a different targeted max time in queue: 15 seconds for p2, two minutes for p1, and 15 minutes for p0.

The average latencies are far less than the targets (for example, even most p0 tasks run in seconds). On occasion, when under heavy load, the latencies can stretch to these windows and beyond. If low latencies are critical or if usage patterns warrant allocating specific sets of worker resources, then we suggest looking at one or more clusters of dedicated workers. 

The way they work is that a set number of dedicated workers is allocated on a monthly basis. Additional capacity can be turned on as needed on an on-demand basis (usually day-by-day, with advance notice or without).

Clusters can be in units of 25 workers starting at 100 workers. On-demand allocations are also typically provisioned in units of 25 although this can be adjusted as necessary. 

A Few Use Cases for Dedicated Workers

Here are just a few use cases for dedicated workers.

Push Notifications

A number of media sites are using dedicated workers to send out push notifications for fast-breaking news and entertainment. These media properties have allocated a specific number of dedicated workers giving them guaranteed concurrency to handle the steady flow of notifications. They augment the set number by adding on-demand clusters in anticipation of large events or when breaking news hits. When a news event takes place, they queue up thousands of workers to run within the worker clusters. The dedicated nature of the clusters means they’re able to meet their demanding targets for timely delivery. 

Continuous Event Processing

Another use for dedicated workers is for asynchronously processing a continual set of events. This can be in the form of offloading tasks from the main app response loop so as to gain by concurrent execution of processes and reduce the response time to users. Several customers for example use IronWorker as part of a location check-in process. Each event might trigger several related actions such as sending posts to Twitter or Facebook or, in the case of one customer, kicking off processes that bring back real-time location-based recommendations. 

Another example might involve sensors and other devices in Internet of Things applications. A continual stream of data inputs gets sent to a message queue, and workers then either perform mass inserts into a datastore or process the data on the fly to create aggregate and derivative values in near real-time.

In these cases, it can make sense to use dedicated clusters. Even though standard IronWorker tasks will generally meet the performance requirements, dedicated clusters can provide added assurances that tasks will execute at a continual pace and with finer latency constraints.


Getting Access to Dedicated Worker Clusters

Making use of dedicated workers is as simple as passing in a dedicated cluster option when you queue a task. When tasks run, they'll be processed within the dedicated cluster.

To get access to dedicated worker clusters, check in with our sales team and we'll get you up and running in no time.

What are you waiting for? High-scale processing awaits.

To learn more about what you can do with a worker system and see more details on the use cases above, check out this article on top uses of IronWorker.

To try IronWorker for free, sign up for an account at Iron.io.