

Monday, August 25, 2014

Segment.io Adds IronMQ as an Endpoint for Message Queueing: IronMQ + Segment.io = Integration Goodness
IronMQ has just been added as an endpoint for message queueing within Segment.io. Segment.io provides a single gateway that lets developers, marketers, and analysts pipe logs, metrics, user actions, and other key app data to hundreds of other applications and tools.

The Segment.io API acts as a control layer for other analytics tools, allowing developers to integrate one single API in place of many.

Using Segment.io's API, an engineer implements tracking once within an application, and Segment.io then automatically translates and implements every third-party tag via its gateway. The net effect is that a developer or marketer can just push an "on" button to enable a new tool.

Segment.io + Message Queueing

There are a number of key advantages to using message queuing in combination with Segment.io. For example, in some situations, the data streams or webhook requests coming out of the Segment.io API can overload systems on the receiving end.

In these cases, IronMQ is a great solution because it can act as a buffer. It also keeps events in FIFO order, stores messages for up to 30 days, and gives developers the ability to build out custom workflows.
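To illustrate the buffering idea, here's a minimal sketch in Python. It's not the IronMQ client, just a toy in-memory FIFO showing how a queue absorbs a burst of webhook events so a slow consumer can drain them at its own pace.

```python
from collections import deque

class BufferQueue:
    """Toy FIFO buffer illustrating how a message queue absorbs
    bursts that would otherwise overload a downstream consumer."""
    def __init__(self):
        self._q = deque()

    def push(self, message):
        # Producer side: cheap append, never blocks the sender.
        self._q.append(message)

    def pop_batch(self, n):
        # Consumer side: drain at its own pace, oldest first.
        batch = []
        while self._q and len(batch) < n:
            batch.append(self._q.popleft())
        return batch

# A burst of 100 webhook events arrives at once...
q = BufferQueue()
for i in range(100):
    q.push({"event": i})

# ...but the consumer drains only 10 per cycle, in FIFO order.
first = q.pop_batch(10)
print([m["event"] for m in first])   # the ten oldest events
```

With a real queue like IronMQ, the same pattern applies, except the buffer is durable (up to 30 days) and shared across processes.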

We'll go deeper into the subject in subsequent posts and dev center articles, but here are a few use cases where using a message queue as a Segment.io endpoint makes a lot of sense.

•  Buffer and persist data for processing at a later time.
•  Extract, transform, and load data into tools such as Amazon Redshift, Salesforce, and other custom tools.
•  Process data on-the-fly asynchronously using IronWorker.

To learn more about how message queues can help increase system reliability and handle data spikes, check out this article on top uses of a message queue.

To integrate IronMQ with Segment.io, head to their Integration Center for simple instructions that will get you up and running in minutes.

Tuesday, August 19, 2014

How One Developer Serves Millions of Beers: Untappd + Iron.io

Untappd – The Mobile App for Beer Lovers
Untappd provides a mobile check-in application for beer lovers. Their application has been downloaded by over a million users, and on any given night they can register over 300,000 individual check-ins and social sharing events.

The Untappd application lets users record their beer selections, share their likes with friends, win rewards, get recommendations, and share a passion for beer with others around the world. A solid design, a fun set of features, and a responsive application are just a few of the reasons they're one of the fastest-growing entertainment-based social communities.

What’s even more impressive about Untappd is that it’s just a two-person company – a designer and a developer, both of whom have other jobs and are doing Untappd on the side. This is a story on how they got started and the work that goes on behind the scenes to power their success.

The Untappd Story

Greg Avola and Tim Mather met over Twitter six years ago when Greg was looking for a collaborator for a Twitter mobile app. They ended up working together on the app and then proceeded to take on several other projects as a designer/developer combination. In early summer of 2010, Tim came up with the idea for a check-in system for beer drinkers. The idea mapped well with Greg’s interest in beer and so they quickly created a mobile app and got to market by the fall.

The Untappd app has evolved a lot since the early days but the main premise is the same – users check-in at locations and check-in on the beers they’re drinking. For each check-in, they become eligible to win badges and receive promotions. They can also get real-time recommendations for beers based on their location.

The team works closely with breweries and beer venues to increase the connections that users have with their favorite beers. They help breweries and other partners create badges and other promotional elements for beers and events. The badges are hugely popular and are posted and shared widely within the app and across social media.

Checking In a Beer
Given how important up-to-date information about beers is, they've created what will soon become one of the largest open-source databases on beer in the world. It's moderated by over 40 volunteers who help clean up information and de-dup entries. They offer free API access for developers and have the ultimate goal of making it the most widely used library of beer data.

Registered users top a million, and the service handles over 300,000 check-ins on weekend nights; it has processed over 50M events in total (the majority of them using Iron.io). Users love the Untappd app and use it to keep track of their beer, discover new favorites, meet new people, and find new places of interest.

The Untappd app is a model that works – a fanatical user base, an app that provides rewards, hundreds of happy partners, and an almost limitless opportunity.

Behind the Scenes of the App

The app framework for Untappd is that of a mobile client, a set of app servers connected to databases, and a large async/background processing component. They make use of a LAMP stack with PHP serving as their primary language. They use MySQL as their primary database for transactions, MongoDB for their recommendation engine and activity feeds, Redis to store all the counts for beers/users/breweries/venues, and Iron.io for their background processing and as their mobile compute engine.

Recent Beers
When users check in to Untappd, a number of transactional events take place. The user account gets updated, and the check-in gets posted to Twitter, Facebook, and/or Foursquare. If a photo is uploaded, it gets processed. Check-in parameters get filtered for location and venue and then piped into the MongoDB clusters that power their local recommendation capability. All in all, there can be up to 10 different events taking place for each location or beer check-in.


Initially, the check-in processes were being handled as a large batch job late at night. Because actions were being posted well after the actual event, the check-in process obviously wasn't responsive enough for their users. The Untappd team then moved these actions into the check-in response loop. That worked for a little while, as it resulted in a more responsive check-in, but it quickly showed signs of strain. On heavy nights, Untappd's main app servers would start to melt because they were being used to process all the actions for each check-in, in addition to serving pages and providing query responses.

This tightly coupled serial approach also meant users had to wait for each process to start and finish in sequence. The delayed response times began to have a noticeable impact on engagement. Checking in took much longer, with the app not returning for several seconds at a time. Users were getting frustrated, and so they were not checking in their second or third beer.

Serial Processing Events at Check In = Slow Response Times

The general experience was also not feeling real-time enough for users because they wouldn’t see tweets until much later, and the information they were receiving from the app after a check-in was not as relevant as they might expect. Recommendations for other beers, for example, were out of date because the database wasn’t getting new beer inserts in a timely manner, and notifications of nearby trending places were not being sent out quickly enough to be relevant.

To keep user engagement high and their user base growing, they needed to find a solution to their check-in problem. They turned to Iron.io to do so.


To make their application more responsive and scalable, Untappd moved their event processing to Iron.io, using a combination of IronMQ and IronWorker. Each check-in event is sent to a queue within IronMQ and then routed to workers within IronWorker. The processing runs outside the user response loop, which speeds up check-ins and provides the ability to handle any spikes in traffic.

Trending Events
Using Iron.io, Untappd has been able to reduce the average check-in time from over 7 seconds to 500ms. They've also eliminated the need to manage infrastructure for this part of their app and given themselves an almost unlimited ability to scale their processing.

Continual Event Processing

The way the event flow works is that they put a check-in event onto a single queue and then that fans out to multiple queues – with each sub-queue controlling a different action, such as posting to social media or updating the recommendation engine. Multiple workers spin up soon after a check-in happens and so by the time the user has laid down their phone and sampled their beer, every action is either in process or has completed.
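The fan-out flow described above can be sketched in a few lines of Python. This is a simplified stand-in, not Untappd's actual code: the action names are illustrative, and plain deques stand in for IronMQ queues.

```python
from collections import defaultdict, deque

# Illustrative action names; the real sub-queues are Untappd's own.
ACTIONS = ["post_twitter", "post_facebook",
           "update_recommendations", "award_badges"]

checkin_queue = deque()           # the single inbound queue
sub_queues = defaultdict(deque)   # one queue per action

def fan_out():
    """Copy each check-in from the inbound queue onto one sub-queue
    per action, so independent workers can process them in parallel."""
    while checkin_queue:
        event = checkin_queue.popleft()
        for action in ACTIONS:
            sub_queues[action].append(event)

checkin_queue.append({"user": "greg", "beer": "ipa-123"})
fan_out()
print({a: len(q) for a, q in sub_queues.items()})
```

Because each action has its own queue, a slow social-media post never holds up badge awards or recommendation updates, and each sub-queue can be drained by its own pool of workers.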

Wednesday, August 6, 2014

Go, IronWorker, and SendGrid at Gengo (a repost)

Shawn Smith from Gengo recently wrote a post on their use of Go, the programming language that we also use at Iron.io for our backend services. (Gengo is a popular people-powered translation platform based in Tokyo and San Mateo.)

The post discusses several of the apps where they're using Go including a deployment app and several conversions of PHP and Python apps. The one that caught our attention is their use of Go with IronMQ and SendGrid to send out large volumes of emails to their user base.

Here's an excerpt from the post:
We created a service to send emails that uses Iron.io's queue service, IronMQ. We call this the Email Consumer, which pulls JSON payloads off of a queue before rendering and sending the email that matches the ID in the payload.
[When] a new customer signs up on our website, our web application puts a JSON payload with an ID onto the queue. The Email Consumer consumes the payload and looks up the email subject for the email with the given ID, also rendering the template with the given data. In this case, the data is simply the user’s name. It also localizes the strings in the template based on the language code provided (in this case, it’s Japanese) and sends the email via SendGrid, welcoming the customer to Gengo.

The lines that we like come in the summary of the app:
"Moving all of our emails into one place helps us easily make consistent style and copy edits. We send over 50 different emails to customers and translators through the Email Consumer, and to date it has sent over 500,000 emails without a problem."
– Shawn Smith, Go Developer at Gengo

Doing work in the background using IronMQ and SendGrid is a great way to distribute work and increase the scalability of an application. That they're using Go to do so makes it all the better. Thanks for the inclusion, Gengo.
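For illustration, here's a minimal sketch of the Email Consumer pattern Shawn describes: pull a JSON payload off a queue, look up the subject and template for the given ID, localize by language code, render with the payload data, and hand the result to a send function. It's in Python rather than Gengo's Go, and the template data is made up, not theirs.

```python
import json
import string

# Illustrative stand-ins for Gengo's subjects and templates (not their real data).
SUBJECTS = {"welcome": {"en": "Welcome to Gengo!",
                        "ja": "Gengoへようこそ！"}}
TEMPLATES = {"welcome": {"en": "Hello $name, thanks for signing up.",
                         "ja": "$nameさん、ご登録ありがとうございます。"}}

def consume(raw_payload, send):
    """Render and 'send' the email matching the ID in a queued JSON payload."""
    payload = json.loads(raw_payload)
    email_id = payload["id"]
    lang = payload.get("lang", "en")        # localize by language code
    subject = SUBJECTS[email_id][lang]
    body = string.Template(TEMPLATES[email_id][lang]).substitute(payload["data"])
    send(subject, body)                      # SendGrid call in the real system

sent = []
consume('{"id": "welcome", "lang": "ja", "data": {"name": "Yoko"}}',
        lambda subj, body: sent.append((subj, body)))
print(sent[0][0])   # the localized (Japanese) subject line
```

Keeping all email rendering behind one consumer like this is what makes the "consistent style and copy edits" in the quote above easy: every template lives in one place.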

About Gengo 

Gengo helps businesses and individuals go global by providing fast, high quality translation powered by the crowd. Upload copy to their website or via API and then their platform springs into action, allocating work orders among thousands of qualified translators in real-time. The platform currently draws from a network of 10,000+ pre-tested translators working across 34 languages.

About the Original Author

Shawn Smith is a software developer from Boston, Massachusetts. After graduating from Northeastern University, he moved to San Francisco to work for Rackspace and now works at Gengo in Tokyo. His favorite programming language is Go.

Monday, July 28, 2014

Iron.io Increases Message Size in IronMQ

Message Size Increases from 64KB to 256KB
Large message sizes are now available within IronMQ. We have increased the message size from 64KB to 256KB.

This increase was originally in response to some use cases around XML, but it also allows the service to handle almost every messaging need. The increased sizes are available on Developer plans and above.

To try IronMQ for free, sign up for an account at Iron.io. You can also contact our sales team if you have any questions about high-volume use or on-premise installations.

Note that it's good design practice to pass IDs in message payloads when dealing with large blocks of data such as images and files. You can put the data in IronCache (which has a maximum item size of 1MB) or in something like AWS S3, and then pass the key as part of the message.
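The pattern looks like this in a toy Python sketch. A plain dict stands in for IronCache or an S3 bucket and a list stands in for the queue; only the small key travels through the message.

```python
import uuid

object_store = {}   # stand-in for IronCache or an S3 bucket
queue = []          # stand-in for an IronMQ queue

def enqueue_large(data: bytes):
    """Store the large payload out-of-band and queue only its key,
    keeping the message itself tiny (well under the 256KB limit)."""
    key = str(uuid.uuid4())
    object_store[key] = data
    queue.append({"blob_key": key})

def process_next() -> bytes:
    """Consumer side: pop a message and fetch the payload by its key."""
    msg = queue.pop(0)
    return object_store.pop(msg["blob_key"])

enqueue_large(b"\x00" * 5_000_000)   # e.g. a 5MB image, far too big to inline
print(queue[0])                       # the queued message holds just one small key
```

The queue stays fast because every message is a few dozen bytes, while the heavy bytes live in a store built for large objects.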

Monday, July 21, 2014

Iron.io Releases Dedicated Worker Clusters

IronWorker Now Offers Dedicated Worker Clusters
Dedicated worker clusters are now available within IronWorker. Sets of workers can be provisioned to run tasks on a dedicated basis for specific customers. The benefits include guaranteed concurrency and stricter latencies on task execution.

This capability is designed for applications that have a steady stream of mission-critical work or organizations that have greater requirements around task execution and job latency.

The IronWorker service is a multi-tenant worker system that uses task priorities to modulate execution across millions of tasks a day. Each priority has a different targeted max time in queue. The targeted max queue times are 15 seconds for p2, two minutes for p1, and 15 minutes for p0.

The average latencies are far less than the targets (for example, even most p0 tasks run in seconds). On occasion, when under heavy load, the latencies can stretch to these windows and beyond. If low latencies are critical or if usage patterns warrant allocating specific sets of worker resources, then we suggest looking at one or more clusters of dedicated workers. 
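To make the targets concrete, here's a small helper that picks a priority from a latency budget. The target numbers come straight from the figures above; the selection logic itself is our own illustrative sketch, not part of the IronWorker API.

```python
# Targeted max time in queue per IronWorker priority (from the post).
TARGET_MAX_QUEUE_SECONDS = {2: 15, 1: 120, 0: 900}

def pick_priority(max_acceptable_delay_s: float) -> int:
    """Choose the most relaxed priority whose target queue-time window
    still fits within the caller's acceptable delay."""
    for priority in (0, 1, 2):   # p0 is the most relaxed, p2 the tightest
        if TARGET_MAX_QUEUE_SECONDS[priority] <= max_acceptable_delay_s:
            return priority
    return 2                      # nothing fits; use the tightest window

print(pick_priority(600))   # a 10-minute budget fits p1's 2-minute target
```

If even p2's 15-second window is too loose for your workload, that's exactly the case where a dedicated cluster (described below) makes sense.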

The way they work is that a set number of dedicated workers is allocated on a monthly basis. Additional capacity can be turned on on demand as needed (usually on a day-by-day basis, with or without advance notice).

Clusters can be in units of 25 workers starting at 100 workers. On-demand allocations are also typically provisioned in units of 25 although this can be adjusted as necessary. 

A Few Use Cases for Dedicated Workers

Here are just a few use cases for dedicated workers.

Push Notifications

A number of media sites are using dedicated workers to send out push notifications for fast-breaking news and entertainment. These media properties have allocated a specific number of dedicated workers giving them guaranteed concurrency to handle the steady flow of notifications. They augment the set number by adding on-demand clusters in anticipation of large events or when breaking news hits. When a news event takes place, they queue up thousands of workers to run within the worker clusters. The dedicated nature of the clusters means they’re able to meet their demanding targets for timely delivery. 

Continuous Event Processing

Another use for dedicated workers is asynchronously processing a continual set of events. This can take the form of offloading tasks from the main app response loop so as to gain from concurrent execution and reduce response times to users. Several customers, for example, use IronWorker as part of a location check-in process. Each event might trigger several related actions, such as sending posts to Twitter or Facebook or, in the case of one customer, kicking off processes that bring back real-time location-based recommendations.

Another example involves sensors and other devices in Internet of Things applications. A continual set of data inputs gets streamed to a message queue, and then workers either perform mass inserts into a datastore or process the data on the fly to create aggregate and derivative values in near real time.

In these cases, it can make sense to use dedicated clusters. Even though standard IronWorker tasks will generally meet the performance requirements, dedicated clusters can provide added assurances that tasks will execute at a continual pace and with finer latency constraints.


Getting Access to Dedicated Worker Clusters

Making use of dedicated workers is as simple as passing in a dedicated cluster option when you queue a task. When tasks run, they'll be processed within the dedicated cluster.
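From the client side, that looks roughly like the sketch below. The exact option name and call signature vary by client library; queue_task and the cluster option here are illustrative stand-ins, not the real API.

```python
# Hedged sketch: "cluster" is an illustrative option name mirroring how
# IronWorker task options are passed as keyword arguments when queuing.
def queue_task(worker_name, payload, **options):
    """Stand-in for a client library's task-queuing call; it just
    records the task plus whatever options were passed along."""
    task = {"worker": worker_name, "payload": payload, **options}
    return task

# Queue a notification task onto a dedicated cluster.
task = queue_task("send_notifications", {"article_id": 42}, cluster="dedicated")
print(task["cluster"])
```

The point is that nothing else about the task changes: same worker code, same payload, with one extra option routing it to the dedicated cluster.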

To get access to dedicated worker clusters, check in with our sales team and we'll get you up and running in no time.

What are you waiting for? High-scale processing awaits.

To learn more about what you can do with a worker system and see more details on the use cases above, check out this article on top uses of IronWorker.

To try IronWorker for free, sign up for an account at Iron.io.

Wednesday, July 16, 2014

Iron.io Releases High Memory Workers

IronWorker Can Now Handle Larger Tasks
We're pleased to announce the availability of high-memory workers within IronWorker. This new capability will provide greater processing power to tackle even more ambitious tasks.

The added boost to the IronWorker environment is perfect for tasks that consume large amounts of computing resources – tasks that might include big data processing, scientific analysis, image and document processing, and audio and video encoding.

A High-Performance Worker System Gets Better

Worker systems are key to scaling applications and building distributed systems – whether that's handling tasks sent to the background, pre-processing data to speed up load times, scaling out processing across large segments of data, or consuming streams of continuous events. A good worker system can handle a variety of tasks and worker patterns and address the majority of the work.

A certain number of tasks, however, might not fit the typical worker system and therefore might need isolated setups with specific machine configurations. Examples might include processing large media files, doing computational analysis over large sets of data, or running other jobs that require greater machine resources, complicated code packages, or dedicated resources.
Image Processing

Memory issues can be elusive to address, especially in a worker system. Depending on the language, when a worker runs out of memory it can do some strange things, such as time out (Node), segfault (Ruby 1.9), or just die without much indication (Ruby 2.1).

High-memory workers extend IronWorker's current capabilities so that you can pass it a greater set of application workloads. The early use cases we're seeing are image and audio processing, but you can use high-memory workers for just about anything where larger in-memory resources are a benefit.

More Memory and Faster Networking Speeds

Audio Processing
The standard worker configuration provides 330 MB of memory and enough CPU power for almost all general application tasks. (This is especially true if work is split up across a number of workers and various worker patterns are employed such as master-slave or task-chaining.)

The high-memory worker configuration provides 1.5 GB+ RAM which translates into much more in-memory processing and little to no storage swapping. The high-memory workers also provide faster I/O and networking capabilities which means faster job execution, faster file transfers, and faster run times.

Getting Started with High-Memory Workers

Using our high-memory clusters is as easy as passing in a hi-mem cluster option when you queue a task. When tasks run, they'll be processed within a high-memory cluster of runners. (The feature is just starting to roll out into production so we'll need to enable your account for access.)

To get started with high-memory workers, check in with our sales team and we'll get you up and running in no time.

What are you waiting for? High-memory awaits.

To learn more about what you can do with a worker system, check out this article on top uses of IronWorker.

To try IronWorker for free, sign up for an account at Iron.io.

Tuesday, July 15, 2014

Iron.io Adds Derek Collison as an Advisor – Former SVP/Chief Architect at TIBCO, Architect of Cloud Foundry

We're happy to announce that we recently added Derek Collison to the Iron.io advisory board. Derek is Founder/CEO of Apcera and an industry veteran and pioneer in large-scale distributed systems and enterprise computing. He has held executive positions at Google, VMware, and TIBCO Software, and so is a great resource for technical and business insight.

Derek Collison Advisor
Since starting Apcera, Derek has been dedicated to delivering composable technology for modern enterprises to innovate faster. He was previously CTO at VMware where he designed and architected the industry's first open PaaS, Cloud Foundry. Prior to that, he was one of two Technical Directors at Google where he co-founded the AJAX APIs group. He also spent over 10 years at TIBCO Software, where he designed and implemented a wide range of messaging products, including Rendezvous and EMS. As TIBCO’s SVP and Chief Architect, Derek led technical and strategic product direction and delivery.

Aside from his wealth of knowledge and experience in the tech industry, Derek is also a big proponent of Go and speaks often on its use within cloud systems. (Here's a slide presentation of his from the recent GopherCon conference.) We're also big users of Go and have written several articles on the subject (you can find them here and here), so it's a great meeting of like minds.

Our goal has always been to provide the best cloud infrastructure services for message queuing and task processing. As we grow the company and expand our offerings into larger organizations, we're grateful to have not only a strong team but also a solid set of investors and advisors to help guide the way.

As someone who has done much in technology and in high-performance cloud systems, Derek Collison is a great addition to our advisory board and we couldn't be more pleased to have him as a part of our team.

Monday, June 30, 2014

Introducing IronMQ Enterprise

Cloud-Native Message Queuing Now Available On-Premise

Message Queuing for Public, Private, and Hybrid Clouds
Today we’re excited to release IronMQ Enterprise, our cloud-native message queuing technology for large enterprise and carrier-grade workloads.

IronMQ Enterprise features a more advanced backend persistence layer, improved message throughput and latency, and, for the first time, deployment options for on-premise, private, and hybrid cloud configurations. With IronMQ, enterprises can move the power of the cloud behind the firewall.

Since 2011, Iron.io has handled billions of messages each month for production-scale customers like AAA, Mentor Graphics, Code for America, Bleacher Report, YouNow, and Hotel Tonight. Our largest customers transmit millions of messages and process tens of thousands of compute hours daily, making Iron.io one of the largest cloud message queuing and async processing platforms available today.

Building on this experience has allowed us to improve on previous generations of messaging solutions. IronMQ Enterprise is the result of a major upgrade of the backend persistence layer using a highly performant embeddable key/value database. The new release also benefits from streamlined authentication access and an improved API. In addition, IronMQ Enterprise eliminates outside dependencies, creating an exceptionally tight, carrier-grade messaging solution that can be quickly deployed in horizontally scalable configurations across multiple zones and regions.

Critical Cloud Messaging Features

IronMQ delivers advanced messaging options including push queues and long-polling along with simple and secure access using HTTP/REST APIs and OAuth authentication. Built to be distributed across multiple availability zones and geographic regions, IronMQ provides reliable message delivery and fault tolerance through message persistence, redundant systems, and automated failover. Additional features include advanced real-time monitoring, message retries, error queues, and queue triggers. IronMQ Enterprise also provides improved gateways for supporting other messaging protocols including Amazon’s SQS messaging protocol and OpenStack’s Marconi project, increasing cloud compatibility and interoperability and reducing vendor lock-in.

Public, Private, and Hybrid Clouds

Introducing IronMQ's on-premise availability is a huge step for us as a company and a technology, making Iron.io the only message queue provider with high-availability services for public, private, and hybrid clouds. IronMQ Enterprise now gives organizations more options for creating responsive, scalable, and fault-tolerant systems using readily accessible cloud technologies. Single-datacenter, multi-datacenter, and carrier-grade options are available and can include managed hosting and 24/7 global support.

Download and Availability 
IronMQ Enterprise is written in Go, and a single-server evaluation version can be installed from binary files or as a Docker image. Go to the IronMQ Enterprise page for access.

The IronMQ Enterprise single server evaluation version is free to use, installs in minutes, and provides message persistence, multi-tenancy, one-time guaranteed delivery, and the same features, capabilities, and access methods that can be found in the public cloud version of IronMQ.

For more information on pricing of high-availability multi-cluster versions, please contact us.

For More Technical Details
For more information on the innovations in this release and the core message queuing engine, please check out our post here.

IronMQ Enterprise: Powered by IronMQ v3

Delivering Improved Performance + On-Premise and Hybrid Cloud Options

Today we announced IronMQ Enterprise, a set of offerings that includes more flexible configuration options, including deployment on-premise and within private clouds, as well as improved message throughput and latency.

At the heart of this release is IronMQ v3, our latest version of IronMQ. A lot of work went on behind the scenes to improve the core messaging engine and so we wanted to use this post to give you some of the details on the efforts by the team.

A good part of the effort on IronMQ v3 was focused on improving the backend persistence layer. Persistence is a key requirement within any production-scale messaging solution. If a server or the message queue goes down, you don’t want messages to be lost. With some message queues, however, especially open source versions, message persistence has to be configured at some considerable cost, and can entail some serious performance hits.

IronMQ offers persistence by design – meaning we don’t offer it in non-persistent form. As a result, the persistent storage layer receives a lot of attention and is something we make significant efforts to get right. In addition to the work on persistence, other efforts that went into IronMQ v3 included changes to the authentication layer as well as improvements to our APIs. All this work resulted in solid performance improvements along with greater ease of deployment and operation. At the end we share some of the performance comparisons against RabbitMQ.

Improved Backend Persistence Layer

IronMQ v3 moves to a modular format that can make use of embeddable key/value databases for the backend persistence layer, replacing the prior version that had been based on a NoSQL database implementation. The evaluation version uses RocksDB, an embeddable open-source key/value database that is a fork of LevelDB. The move to this type of key/value database provides a persistence layer that is far better suited to the needs of message queuing and the deployment needs of distributed cloud technologies. A few of these benefits include:

Read/Write Optimizations

A Key/Value DB is
Better Suited for MQs
The read/write patterns for a queue are different from most other transactional or data storage use cases. The most common queue pattern is write once, read once, then delete. While additional metadata gets stored and additional accesses go on behind the scenes, the pattern of messages in and messages out means a continual recycling of data over limited durations. A key/value database handles frequent deletions more gracefully than our prior NoSQL solution, performing cleanup in multiple background threads and thereby causing little to no performance degradation for live traffic. The key/value database also uses lookup optimizations to reduce the time for most "get" operations and is further optimized in conjunction with a large in-memory cache.
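The write-once/read-once/delete pattern maps naturally onto a key/value store. Here's a toy Python sketch of a FIFO queue built on one, with a plain dict standing in for an embeddable database like RocksDB (which, unlike the dict, iterates keys in sorted order natively).

```python
import itertools

class KVQueue:
    """Toy FIFO queue over a key/value store: each message is written
    once under a monotonically increasing key, read once, then deleted."""
    def __init__(self):
        self._kv = {}                       # stand-in for an embedded KV DB
        self._seq = itertools.count()       # monotonically increasing keys

    def put(self, body):
        self._kv[next(self._seq)] = body    # write once

    def get(self):
        if not self._kv:
            return None
        key = min(self._kv)                 # lowest key = oldest message;
                                            # a real KV DB iterates in key
                                            # order, no scan needed
        return self._kv.pop(key)            # read once, then delete

q = KVQueue()
q.put("a"); q.put("b"); q.put("c")
print(q.get(), q.get())   # FIFO: prints the two oldest, "a b"
```

Every message key is inserted once and deleted once, which is exactly the churn profile that background compaction in a key/value engine handles well.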

Locking Optimizations

Improved Read/Writes
Reduce Locks
IronMQ v3's database layer is logically partitioned into a read-only path and a read-write path, and its write patterns scale concurrently across queues. The separate paths and concurrent writes drastically reduce the number of locks on write operations, which is critical for supporting high-throughput transactions. While the database guarantees atomicity and durability, we added our own level of granularity on top of it to provide locking for guarantees such as FIFO, one-time message delivery, and other consistent operations. This means we can optimize our locks on a per-queue basis, taking advantage of write concurrency to avoid unnecessary locks on unrelated queues.
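The per-queue locking idea can be sketched with one lock per queue name. This is an illustrative simplification of the design, not IronMQ's actual code.

```python
import threading
from collections import defaultdict, deque

class MultiQueue:
    """One lock per queue, so heavy writes on one queue never
    contend with operations on unrelated queues."""
    def __init__(self):
        self._queues = defaultdict(deque)
        self._locks = defaultdict(threading.Lock)

    def push(self, queue_name, message):
        with self._locks[queue_name]:       # contends only within this queue
            self._queues[queue_name].append(message)

    def pop(self, queue_name):
        with self._locks[queue_name]:
            q = self._queues[queue_name]
            return q.popleft() if q else None

mq = MultiQueue()
mq.push("emails", "m1")
mq.push("logs", "m2")                        # never blocks on "emails"
print(mq.pop("emails"), mq.pop("logs"))      # prints "m1 m2"
```

A single global lock would serialize every push and pop across all queues; scoping the lock to the queue keeps FIFO and one-time-delivery guarantees where they matter while letting unrelated queues proceed in parallel.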

Storage Optimizations

An Optimized DB Means
Faster Data Access
Multicore servers and new storage options such as SSD flash drives allow storage IOPS on the order of millions of requests per second. Database driver software that can make use of the IOPS offered by flash storage performs much faster than unoptimized databases across random reads, writes, and bulk uploads. The switch to a more modern key/value database provides greater upside for gains on write workloads, bulk uploads, and pure random read workloads. Multi-threaded compaction processes can also provide gains over single-threaded processes on IO-bound workloads, translating into fewer write stalls and more consistent latency.

Streamlined Authentication Access 

An Embedded AuthDB Reduces Latency

IronMQ v3 also contains streamlined authentication. IronMQ is a multi-tenant message queue that uses HTTP/REST protocols and OAuth authentication. Every API operation gets authenticated, which means there is overhead against either the AuthDB or a cache for every operation. The same improvements made to the backend persistence layer described above were also made to the auth componentry within IronMQ v3. Namely, the prior database was replaced with an efficient key/value store, so the auth layer inherits the same performance characteristics – not just reduced auth overhead but also greater consistency in response times. The change within the auth access layer also includes a more modular architecture, which makes it easier to support additional non-OAuth access methods such as PKI signatures.


One thing to highlight here is that IronMQ Enterprise uses HTTP as the transport protocol for connecting to the service. This is in contrast to RabbitMQ, which uses AMQP. We believe there are distinct advantages to using HTTP. One of the more pressing reasons to favor HTTP over AMQP is that AMQP is a separate application-layer protocol from the one developers are used to using, and it carries a significant amount of complexity. Everyone can easily speak HTTP, but it takes special effort to speak AMQP. Another distinction is that certain cloud application hosts don't allow socket connections to and from their virtual environments, but they do allow HTTP requests. Additionally, HTTP and HTTPS are almost always open on enterprise firewalls, but special ports for AMQP may not be. You can read more of our reasons to favor HTTP over AMQP here.

API Modifications | Support for Other Messaging Protocols

 Improved HTTP/REST
API Consistency
The process of upgrading to IronMQ v3 also allowed us to make changes to the IronMQ APIs to accommodate learnings from users and their use of a cloud-native message queue. The HTTP/REST calls didn't change drastically, but we were able to address a few idiosyncrasies of the prior API structure and introduce greater consistency within the command set.

One of the changes to the API is switching the IronMQ 'get' operation from an HTTP GET to an HTTP POST, to be consistent with REST conventions (a 'get' reserves a message and so changes server state). Another change is the introduction of a reservation ID, returned when getting a message, that must be supplied when deleting the message so as to avoid race conditions (which can occur when a message times out but is still in use by a client).
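To see why the reservation ID matters, here's a toy Python sketch of the race it prevents. The class is our own simplified illustration, not the IronMQ implementation: when a reserved message times out and is handed to another consumer, the first consumer's now-stale reservation ID can no longer delete it.

```python
import uuid

class ReservedQueue:
    """Toy model of reservation IDs: a message can only be deleted with
    the reservation ID handed out at get-time, so a consumer holding a
    timed-out (re-reserved) message cannot delete it out from under
    the consumer that currently owns it."""
    def __init__(self):
        self._messages = {}              # msg_id -> (reservation_id, body)

    def put(self, msg_id, body):
        self._messages[msg_id] = (None, body)

    def get(self, msg_id):
        # Each get (re)reserves the message under a fresh reservation ID.
        res_id = str(uuid.uuid4())
        _, body = self._messages[msg_id]
        self._messages[msg_id] = (res_id, body)
        return res_id, body

    def delete(self, msg_id, res_id):
        current, _ = self._messages[msg_id]
        if current != res_id:            # stale reservation: reject
            return False
        del self._messages[msg_id]
        return True

q = ReservedQueue()
q.put("m1", "hello")
stale, _ = q.get("m1")               # first consumer reserves the message...
fresh, _ = q.get("m1")               # ...its timeout lapses; it's re-reserved
stale_ok = q.delete("m1", stale)     # the old reservation is rejected
fresh_ok = q.delete("m1", fresh)     # the current reservation succeeds
print(stale_ok, fresh_ok)            # prints "False True"
```

Without the reservation ID, the slow first consumer could delete a message the second consumer was still processing; with it, only the current owner can complete the delete.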

Developers should not see any real difference in the APIs as most users make use of client libraries to interface with the service. (The client libraries add an abstraction layer to the APIs to make it easier to use the service within a specific language or app framework.) The use of IronMQ v3 does require the use of v3 specific client libraries, which you can find available on GitHub.

More Flexible Protocol Support

Improved API Gateways | Enhanced Protocol Support

Also included alongside the API modifications are gateways to support other protocols such as the AWS SQS protocol and the OpenStack Marconi protocol. SQS is a popular message queuing service provided by Amazon that uses proprietary approaches for authentication and API commands. Marconi is a cloud messaging service project within OpenStack. (Iron.io participated in and assisted with developing the specification for Marconi.) IronMQ is compatible with almost all the features of both services, and the more flexible gateway in IronMQ v3 makes it easier to support these two messaging protocols.

IronMQ v3 – A Better Cloud MQ

Our development team put a lot of hard work into improving IronMQ and releasing IronMQ Enterprise. This work has paid off with solid performance gains, simpler componentry, and the elimination of outside dependencies. The result is a fully featured but very tight, high-performance messaging solution that can be more easily deployed in high-availability configurations across clouds.

Downloading IronMQ v3

IronMQ is written in Go. A single server evaluation version can be installed from binary files or as a Docker image from here:

The IronMQ single server evaluation version is free to use, installs in minutes, and provides message persistence, multi-tenancy, one-time guaranteed delivery, and the same features, capabilities, and access methods that can be found in the public cloud version of IronMQ.

Performance Measurements

IronMQ vs RabbitMQ
The work described above has resulted in significantly higher throughput for writes as well as performance increases for reads over prior versions of IronMQ.

Below are performance tests for IronMQ and RabbitMQ. We chose RabbitMQ as a base comparison given how popular and well-established RabbitMQ is. The results are indicative of the performance characteristics of IronMQ v3.

A total of 5 tests were performed. The first 4 tests were performed in “transient” mode for RabbitMQ, meaning no message persistence – all messages exist only in memory. The last test was performed in “persistent” mode for RabbitMQ. Note that performance drops significantly in this mode. IronMQ is always persistent (meaning that messages are persisted to disk and cannot be lost in the event of a server or MQ crash).

Test Specifications

The code for the test suite can be found at:

  • Both RabbitMQ and IronMQ were run on separate AWS m3.2xlarge boxes with the following specs:
    • 8 vCPU clocked at 2.5GHz
    • 30 GB RAM
    • 2 x 80 GB SSD storage
  • Databases for each MQ were cleared before each benchmark
  • Producers and consumers ran on a single AWS m1.small box in the same datacenter
  • Each message body was a 639-character phrase

  • 4 tests were performed in “transient” mode for RabbitMQ (no persistence)
  • 1 test was performed in “persistent” mode for RabbitMQ (resulting in performance hits)
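The shape of such a benchmark is simple: enqueue N messages, dequeue them all, and divide the counts by wall-clock time. The sketch below shows that shape against an in-memory stand-in queue; the stand-in, the message count, and the timing approach are illustrative only (the real suite drives RabbitMQ and IronMQ clients over the network):

```python
import time
from collections import deque

def benchmark(queue, n_messages, body):
    """Measure naive enqueue/dequeue throughput for any queue-like object."""
    start = time.perf_counter()
    for _ in range(n_messages):
        queue.append(body)          # "producer": post one message per call
    write_rate = n_messages / (time.perf_counter() - start + 1e-9)

    start = time.perf_counter()
    consumed = 0
    while queue:
        queue.popleft()             # "consumer": get/delete one message
        consumed += 1
    read_rate = consumed / (time.perf_counter() - start + 1e-9)
    return write_rate, read_rate

# Stand-in for a real MQ client; the body mirrors the 639-character phrase
# used in the tests above.
stand_in = deque()
body = "x" * 639
write_rate, read_rate = benchmark(stand_in, 10_000, body)
```

Against a real queue, the `append`/`popleft` calls become API requests (batched or one at a time, matching the test variants above), which is where transport and persistence costs show up.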

Processing 1 Message at a Time (1 message per API request)

Processing 100 Messages at a Time (100 messages per API request)

Processing 100 Messages at a Time w/Single Consumer/Producer

Processing 10,000 Messages per Queue, 100 Queues, 1 Message at a Time

Processing 1 Message at a Time with Persistence Turned on for RabbitMQ

Future Benchmarks

We’re doing more benchmarks against other MQs and configurations and so stay tuned. For now, though, we’re celebrating a bit and then we’re getting back to work. There’s a lot more performance we believe we can squeeze out.


Wednesday, June 25, 2014

Getting Pushy with Symfony2! (guest post)

This is a guest post from Keith Kirk who is VP Engineering at Underground Elephant. 

Message queues are not a new concept – neither are push notifications, and surely not HTTP POSTs for that matter. However, when you combine these ideas you have a very flexible queueing system.

Sprinkle in a little bit of Symfony's EventDispatcher... suddenly your Symfony application starts to feel a whole lot more responsive. And that feels great.

The Lead Up...
Underground Elephant is an online performance marketing company that provides customer acquisition software for its clients. We're constantly weighing the benefits of hosting our own services vs. using cloud-based alternatives. More often than not, in our fast-paced environment, we have bigger fish to fry than administering, scaling, and maintaining simple services that, while important, are not core to our business.

So, when it came to finding ways to increase the responsiveness of our Symfony2 applications and move non-essential processes to the background, we started exploring job queues and messaging.

Admittedly, spinning up an instance of RabbitMQ or ØMQ ain't hard, neither is writing a script that will keep a socket open waiting for new messages.

I guess, frankly, we just didn't want to do it. It was another moving part that my teams had to maintain outside of their applications and more dependencies and infrastructure in our deployment.

Queue Your Cake and Eat it Asynchronously
What I wanted was to keep my dependencies light and in code – maintained by composer. I wanted a way to write my worker code as simple services within my Symfony application - all maintained within the same repository and deployed just as easily.

I wanted persistence in my queue and a level of failover in case of errors. I wanted hands off scaling and distribution - and I really wanted someone else to manage it.

So, I wanted a lot, actually.

Enter the Symfony2 QPush Bundle, integrated with IronMQ, Iron.io's cloud message queue as a service.

The bundle integrates push queue providers directly into your Symfony application allowing you to create and manage multiple queues. Subscribers are configurable, allowing differing and/or multiple subscribers per queue.

You can publish messages easily...

The bundle leverages the EventDispatcher to dispatch a MessageEvent when your published messages are received from your queue. Your services are called automatically based on simple tagging which gives a lot of flexibility in chaining services or handling multiple queues in a single service.
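The QPush Bundle itself is PHP/Symfony, but the dispatch pattern is language-agnostic. This Python sketch (all names are illustrative, not the bundle's API) shows the idea: multiple services register as listeners on a queue's message event, and each one is invoked when a pushed message arrives:

```python
class MessageEvent:
    """Carries a received queue message to listeners."""
    def __init__(self, queue_name, body):
        self.queue_name = queue_name
        self.body = body

class EventDispatcher:
    """Minimal dispatcher in the spirit of Symfony's EventDispatcher."""
    def __init__(self):
        self._listeners = {}  # event name -> list of callables

    def add_listener(self, event_name, listener):
        self._listeners.setdefault(event_name, []).append(listener)

    def dispatch(self, event_name, event):
        for listener in self._listeners.get(event_name, []):
            listener(event)

# Two "services" subscribed to the same queue -- this mirrors tagging
# multiple Symfony services onto one queue's message event.
received = []

def order_service(event):
    received.append(("order-service", event.body))

def audit_service(event):
    received.append(("audit-service", event.body))

dispatcher = EventDispatcher()
dispatcher.add_listener("orders.message_received", order_service)
dispatcher.add_listener("orders.message_received", audit_service)

# A pushed message arrives from the queue; both listeners fire.
dispatcher.dispatch("orders.message_received", MessageEvent("orders", {"id": 7}))
```

Because the listeners are plain services, chaining handlers or pointing one service at several queues is just a matter of registration, which is the flexibility described above.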

Because it utilizes simple services in Symfony, it's very easy to reuse your existing code. For us, this made adoption incredibly easy.

Handle Events in Your Services

Check out the bundle and the documentation on ReadTheDocs for more information on how to incorporate this into your application.

Push Queues are Awesome!
Push Queues may not appeal to everyone or fit each use case, but there is a ton of upside to them.

You know that queue you have that's not heavily utilized, but where processing each message as soon as possible is incredibly important? Yeah, the one you're polling on the same 5-second interval at 3am as you are at the heaviest time of day. By pushing notifications directly to your application, you can remove that wasted compute, those wasted API calls, and the wasted money.

No more daemon, no more cron.

For PHP specifically, threading has always been a sore spot (read, "non-existent"). However, with Push Queues, you can utilize your web server (Apache, Nginx, etc) to handle the threading for you. This also means you can easily scale horizontally by either registering more subscribers or utilizing a cluster of web servers behind a load balancer.
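Concretely, "let the web server do the threading" just means your push subscriber is an ordinary request handler. This minimal WSGI sketch (the payload shape and endpoint behavior are illustrative assumptions) shows one:

```python
import json

def qpush_endpoint(environ, start_response):
    """Minimal WSGI handler acting as a push-queue subscriber endpoint.

    Each pushed message arrives as a plain HTTP POST; the web server's
    worker model (Apache/Nginx processes or threads) supplies concurrency.
    """
    length = int(environ.get("CONTENT_LENGTH") or 0)
    message = json.loads(environ["wsgi.input"].read(length))
    # ...hand message off to your application service here...
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"OK"]  # a 2xx response acknowledges receipt to the push queue
```

Put a cluster of these behind a load balancer and consumption scales horizontally, with no daemons or cron jobs to babysit.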

Wrapping Up
The QPush Bundle is open source and openly available to use. We also welcome contributions and feedback. If you have any questions, please visit us on GitHub!

About the Author
Keith Kirk is VP Engineering at Underground Elephant. Underground Elephant delivers enterprise marketing software solutions. They were founded in 2008 and are headquartered in San Diego, California.