

Monday, April 14, 2014

Iron.io Joins Red Hat’s OpenShift Marketplace

Iron.io is pleased to announce that its IronMQ and IronWorker solutions are now part of the OpenShift Marketplace. The marketplace is an important component of Red Hat’s OpenShift Platform-as-a-Service (PaaS) in that it lets developers combine the benefits of enterprise PaaS with tightly integrated, complementary solutions – all without losing time on technology integration. Iron.io provides an enterprise-grade message queuing and task processing platform that makes it easy to connect systems, handle event processing, and perform highly concurrent task processing. IronMQ is a message queue written in Go that provides features such as push queues, error queues, retries, and alerts, along with high availability, guaranteed one-time delivery, message persistence, and multi-zone redundancy.

IronWorker is a scalable task queue / worker service that provides an easy and reliable way to process workloads asynchronously across thousands of cores. Tasks can be queued directly from web and mobile apps, triggered via webhooks, scheduled to run later, or run continuously in the background. IronWorker supports all common languages including binary packages, offers task retries, and sophisticated task monitoring.

“The OpenShift platform is an important development for Iron.io. The combination of Red Hat’s public cloud and on-premise options along with this new marketplace is a powerful offering and directly in line with our deployment options. Deploying our platform as a cartridge leverages the flexibility and security of OpenShift to give developers powerful ways to build applications.”
– Chad Arimura, CEO, Iron.io

More on Red Hat’s OpenShift Marketplace  

As more developers use enterprise PaaS for an increasing array of applications, a key to their success is a vibrant and robust partner ecosystem. OpenShift’s current partner ecosystem uses the OpenShift Cartridge specification method to link key technologies and services into applications built on OpenShift, giving customers access to a variety of offerings from cloud industry leaders. The OpenShift Marketplace provides an ideal platform to reach customers and developers in one easy, interactive location, enabling ISVs to showcase their solutions and marketing material, and complete transactions in a few easy steps. 
Read more >>

Iron.io at the Red Hat Summit in San Francisco

Iron.io will be at the Red Hat Summit, April 14-17, in the Partner Pavilion. The summit offers more than 150 breakout sessions and hands-on labs, with attendees expected to range from developers, cloud enthusiasts, and enterprise architects to program managers and CxOs. If you’ll be there and want to talk distributed systems, message queuing, OpenShift, OpenStack, or a host of other topics, please stop by our booth. We’ll be right by the developers lounge.

Wednesday, April 9, 2014

How Edeva Uses MongoDB and IronMQ to Power Real-time Intelligent Traffic Systems

Edeva AB develops and markets intelligent traffic systems. Their product is a dynamic speed bump called Actibump. The selective system makes it possible to balance accessibility, traffic flow, and speed control in a way that static speed bumps cannot.

Actibump consists of one or more road modules that are mounted into a cast foundation and a radar unit that transmits information to a central control system. The road modules raise and lower in response to vehicle speed and are controlled and monitored over the Internet. Actibump can be expanded with variable speed limits, automatic traffic control (ATC), transponder systems for emergency vehicles, and real-time background data capture for statistical analysis of traffic.

From Road Trials to Production

An Actibump in Sweden
Actibump has been in operation in Linköping, Sweden since 2010, at a location where unprotected road users cross a street used by cars and public transport. The speed of a vehicle approaching the Actibump is recorded with the aid of radar measurement or induction loops. No action is taken if the vehicle speed is within the legal limit, but if it is approaching faster than is permitted, a part of the roadway pivots downwards, creating an edge. Driving over this edge causes discomfort for the driver, encouraging them to reduce speed before reaching the obstacle.

Here's a video of an Actibump in action:

How Actibump Works
Vehicle speeds have been actively monitored before and after installation. The percentage of vehicles exceeding the speed limit has dropped from 70 percent to 20 percent in the three years since installation, demonstrating Actibump’s long-term beneficial effect. The results also show that at least 95 percent of passing traffic keeps to the speed limit when passing Actibump. This leads to significantly increased safety for unprotected road users.

Reducing Speeds on the Øresund Bridge

In December 2013, Edeva AB won a public procurement for variable speed impediments to the Øresund Bridge. The bridge is a double-track railway and dual carriageway bridge-tunnel across the Øresund strait between Sweden and Denmark.

Øresund Bridge between
Sweden and Denmark
The bridge runs nearly 8 kilometers (5 miles) from the Swedish coast to the artificial island of Peberholm, which lies in the middle of the strait. (The remainder of the link is a 4 km tunnel from Peberholm to the Danish island of Amager.) Actibump will be installed in four of the eleven lanes at the toll station on the Swedish side for Denmark-bound traffic, and will help make the work environment safer for bridge personnel.

About Edeva AB

Edeva is led by David Eskilsson and is a spinoff from Prodelox, a leading product development company, and a member of the LEAD business incubator in Linköping. In response to the positive evidence of success and the visibility gained from the Øresund Bridge installation, Edeva expects to significantly expand its opportunities in Europe during 2014.

Edeva’s Actibump addresses a worldwide problem: handling the compromise between traffic safety, accessibility, traffic flow, and work environment in public transport, and doing so in an intelligent and mindful manner. Edeva uses MongoDB and Iron.io’s IronMQ as core pieces of their infrastructure to collect and process real-time data and create more intelligent traffic systems. Edeva’s combination of mechanical engineering and advanced data collection and processing demonstrates that the Internet of Things is real and within easy reach.

Details on How the Actibump Works

Choosing a Central Database (MongoDB)

MongoDB Documents and Collections

Edeva identified the opportunity to centrally collect and analyze traffic data in real time, enabling their customers to unlock deeper operational intelligence into traffic management. The application would initially collect data on each vehicle, including maximum speed, average speed and vehicle type, but they also intended to expand this in the future to include weather conditions, congestion levels and other metrics that would improve traffic flow and the safety of road users. 

The development team initially considered using MySQL as their central database, but quickly realized that their changing data requirements would not be supported by its rigid relational data model. With the need to ingest highly variable and rapidly changing sensor data from their Actibump modules and run analytics against it in real time, the development team chose MongoDB. As a consequence, Edeva can dynamically adjust measurements and configuration, while aggregating data to customer dashboards in real time. In addition, the engineering team can quickly evolve their application to meet new customer requirements.
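To make that flexibility concrete, here is a small sketch (our own illustration; the field names are hypothetical, not Edeva's actual schema) of how a per-vehicle document in a document store can simply grow new fields as requirements change, with no migration of older documents:

```javascript
// Hypothetical per-vehicle reading as a MongoDB-style document.
var reading = {
  site: "linkoping-1",
  ts: new Date("2014-04-09T07:31:02Z"),
  maxSpeedKmh: 54,
  avgSpeedKmh: 47,
  vehicleType: "car"
};

// A later reading can carry extra metrics (weather, congestion) without
// any schema migration; older documents are left untouched.
var laterReading = {
  site: "linkoping-1",
  ts: new Date("2014-04-09T07:31:09Z"),
  maxSpeedKmh: 41,
  avgSpeedKmh: 38,
  vehicleType: "bus",
  weather: { tempC: 3, precipitation: "sleet" },
  congestionLevel: 2
};
```

A relational schema would have required an ALTER TABLE (or a sparse column) for each new metric; here the new fields simply appear on new documents.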

Sunday, March 30, 2014

Using Workers for Large-Scale PHP Posting with Laravel (a repost from Scott Wilcox)

Scalable Workers in Laravel
One of the fun parts about working at Iron.io is being able to connect with developers working on some great projects.

One of these is Scott Wilcox. His service posts events to Twitter and has been growing in popularity over the last several months. Scott wrote a great post the other day on the growth of his service and the things he did to help scale the processing.

Scott built his service on Laravel, a popular and growing PHP framework. He used Iron.io’s IronWorker service in conjunction with Laravel to power and scale the posting to Twitter.

Below is an excerpt of his post. If you're interested in ways to build something to be able to scale easily, it's definitely worth reading.
Around October 2013 I discovered Iron.io almost by accident. I'd recently begun rewriting the service in the excellent Laravel framework. I was testing Laravel 4's queuing systems and noticed a reference to Iron.io. After reading more into the IronMQ product, I came across IronWorker. 
The difference IronWorker provided cannot be overstated. It allows us to create updates, package them up to be sent, and then queue them en masse into an IronWorker queue. These are then processed in batches, and an entire day's updates can be sent out in a matter of minutes. 
Sunday is the busiest day of the week for us. Regularly for a year now, we've been pushing out over 200,000 updates. That's 8,333 updates an hour, or 138 a minute. This would take over 24 hours sequentially, around 18 hours with multiple curl calls, and just over 40 minutes with IronWorker, at a fraction of the cost. 
I was able to remove one of the servers and save on the hosting cost – this alone reduced our costs by half. 
The exceptional service, support, and price are worth it alone. Mix that in with the fact that costs were halved, and I'm not sure how you could look anywhere else when you need to run PHP workers for your large-scale projects.
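A quick sanity check of the arithmetic in the excerpt (200,000 updates spread across a 24-hour day):

```javascript
// 200,000 updates in one day.
var updates = 200000;
var perHour   = Math.floor(updates / 24);        // 8,333 updates an hour
var perMinute = Math.floor(updates / (24 * 60)); // 138 a minute
console.log(perHour + " per hour, " + perMinute + " per minute");
```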

To read the full post, visit Scott's blog here.

To check out the Iron.io platform, sign up for a free account.

To learn more about using the platform with Laravel, take a look at these resources:

Friday, March 14, 2014

Iron.io Enhances Web and Mobile Capabilities with CORS Support

Iron.io is pleased to announce the introduction of support for Cross-Origin Resource Sharing (CORS). This capability means that developers have even easier access to the Iron.io platform from desktop and mobile browsers and other client devices.

CORS is a web standard that supports cross-domain communication between web clients and server-side domains. Without CORS, a webpage cannot access another domain, either to get data or to access a service. (Cross-domain requests are typically forbidden by web browsers under the same-origin security policy.)

Why CORS Matters with the Iron.io Platform

All common browsers support CORS, so adding it to the Iron.io APIs means that developers can make calls to IronMQ, IronWorker, and IronCache from webpages and other JavaScript clients in a very simple and secure manner. 

(An alternative to CORS is JSONP, but it is restricted to GET requests, which means other methods such as POST, PUT, and DELETE are not available.)

The use cases for connecting mobile and desktop clients to Iron.io are pretty clear. Client-side components can put messages on IronMQ, run tasks in IronWorker, or put data in IronCache without having to hit another server (and have that server hit the Iron.io APIs). This greatly streamlines app development in that you can send work to the background – whether it's to execute within your own system or to push to another service.

Iron.io handles the scaling and execution of the tasks, or the pushing of the message to another endpoint. Developers simply make calls to the Iron.io APIs and get production-scale message queuing, task processing, and key/value data storage.

How CORS Works

CORS builds on top of the XMLHttpRequest object, allowing JavaScript within a webpage or mobile app to make XMLHttpRequests to other domains. The browser adds special headers, and sometimes makes additional (preflight) requests, on behalf of the client during a CORS request. The server responds in kind. 

Much of the complexity of CORS is handled by the browser and the server. As a client-side developer, you can access these new headers but are generally shielded from the majority of the details of how CORS works. For deeper background, however, you can check out an article by Monsur Hossain on Using CORS. It shows how to configure clients to make cross-origin requests, along with an example of an end-to-end request.
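As a rough sketch of what the browser does behind the scenes (the header values here are illustrative, not captured from the Iron.io servers), a cross-origin POST carries an Origin header, and the server signals permission in its response:

```
POST /1/projects/{Project ID}/queues/{Queue Name}/messages HTTP/1.1
Origin: https://myapp.example.com
Content-Type: application/json

HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://myapp.example.com
```

For non-simple requests such as a JSON POST, the browser first sends an OPTIONS preflight asking which methods and headers are allowed before issuing the real request.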

Browser Support

Here is a table of the various desktop and mobile browsers and their levels of CORS support. 

You can view the full table of browser versions here.

Making a CORS Request for the Platform

This section shows how to make a cross-domain request in JavaScript to IronMQ. The project ID and OAuth token provide authentication.

  // Create the XHR object.
  function createCORSRequest(method, url) {
    var xhr = new XMLHttpRequest();
    xhr.open(method, url, true);
    return xhr;
  }

  // Insert your credentials here:
  var projectId = 'YOUR PROJECT ID';
  var queueName = 'YOUR QUEUE NAME';
  var token     = 'YOUR TOKEN';

  // IronMQ v1 endpoint (the host may vary by region/cloud; see the Dev Center)
  var url = 'https://mq-aws-us-east-1.iron.io/1/projects/' + projectId +
            '/queues/' + queueName + '/messages?oauth=' + token;

  // Create a request and set the content-type header
  // (a missing content-type header will lead to a 406 response).
  var xhr = createCORSRequest('POST', url);
  xhr.setRequestHeader("Content-Type", "application/json");

  // Parse the response in the format of a JSON string.
  xhr.onload = function() {
    var resp = JSON.parse(xhr.responseText);
    console.log("My message is: ", resp);
  };

  xhr.onerror = function() {
    console.log('Whoops, there was an error making the request.');
  };

  // Stringify your JSON request body.
  xhr.send(JSON.stringify({"messages": [{"body": "your message"}]}));

Similar approaches can be used for queuing up a worker or putting an item into a cache in Iron.io, except with different endpoints, worker/cache names, methods, and payloads.
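For instance, a worker task could be queued with the same XHR pattern. The sketch below assumes the IronWorker v2 HTTP API's tasks endpoint and payload shape (verify both against the Dev Center before relying on them; "sendmail_worker" is a hypothetical code name):

```javascript
// Build the request pieces for queuing an IronWorker task via CORS.
var projectId = 'YOUR PROJECT ID';
var token     = 'YOUR TOKEN';

// Assumed IronWorker v2 endpoint; the host may vary by region/cloud.
var workerUrl = 'https://worker-aws-us-east-1.iron.io/2/projects/' +
                projectId + '/tasks?oauth=' + token;

// Each task names an uploaded code package; the payload is a string
// handed to the worker at run time.
var body = JSON.stringify({
  tasks: [{
    code_name: 'sendmail_worker',
    payload: JSON.stringify({ to: 'user@example.com' })
  }]
});
// A createCORSRequest('POST', workerUrl) call would then send `body`,
// exactly as in the IronMQ example.
```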

Visit the Dev Center for a complete list of API methods for the services.

IronMQ API reference

To learn more about how Iron.io can help you process messages, run tasks, and store key/value pairs from desktop and mobile clients, visit Iron.io today.

Wednesday, March 12, 2014

Iron.io Launches Custom Runtime Environments for IronWorker

Custom Runtime Language Environments in IronWorker

Iron.io announced today the introduction of custom language environments within its IronWorker processing platform.

Instead of a single standard environment, developers can now define runtime environments and write workers for specific language versions.

IronWorker already supports all common languages including PHP, Ruby, Python, .NET, Node.js, Java, Go, and binary files, but this release adds finer-grained support for specific language versions including Ruby 2.1, Python 3.2, Mono 3.0, and others. (The full table can be seen below.)

Greater Flexibility and Increased Reliability 

IronWorker is a highly available worker service that provides background processing, scheduled jobs, and concurrent scale-out processing. Worker systems are critical within almost every production-scale cloud application. Whether it’s processing large amounts of data, processing event streams, or handling individual jobs in the background, worker systems allow applications to scale more easily and system components to operate more independently.

User-definable runtime worker environments solve a number of problems with most worker systems. Developers needing the latest language versions can now have access to them as soon as they become available. At the same time, developers with existing workers can lock down their environments to use specific versions so as to maintain consistency.

Custom language environments let Iron.io serve both of these needs – allowing it to offer the most current environments while still providing a reliable and stable platform for production-scale systems.

Another advantage of custom runtime environments is that developers no longer have to limit themselves to a single compute stack. They can change environments worker by worker. As a result, a single application can have workers using different languages and different versions – with zero installation and zero ops. This reduces the risk of monolithic apps where language upgrades can take months, or in some cases, years. Using a flexible and scalable worker system like IronWorker means applications can be more loosely coupled and distributed and therefore easier to scale and easier to maintain.

New IronWorker Runtime Environments 

In addition to the existing language environments, the custom language versions now available in IronWorker include:

  • Ruby 1.9
  • Ruby 2.1
  • Python 2.7
  • Python 3.2
  • PHP 5.4
  • Node 0.10
  • Java 1.7
  • Scala 2.9
  • Mono 2.10
  • Mono 3.0

Additional environments can be created relatively easily. See below for how to get in touch with us if you're interested in environments not listed here or in the default environment.

How to Make Use of Custom Runtime Environments in IronWorker

To make use of a custom worker environment in IronWorker, all you need to do to is add a single line to your .worker config file. (The .worker file is included as part of the worker upload process and tells IronWorker what the executable file is, what code packages and data files are included, whether to build the worker remotely, as well as a number of other options.)

Note: For the .worker file, a worker name needs to be included with the .worker suffix. For example, sendmail_worker.worker would create a worker in the system called "sendmail_worker".

To specify a particular custom environment for your worker, add the language/version parameter as a line in your .worker file.

To use a Ruby 2.1 environment, include the following in your .worker file:

 stack 'ruby-2.1'

To specify a Java 1.7 environment, include the following:

stack 'java-1.7'
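Putting the pieces together, a minimal .worker file for a hypothetical Ruby worker might look like this (the file, gem, and worker names are illustrative; see the Dev Center for the full .worker reference):

```ruby
# sendmail_worker.worker (creates a worker named "sendmail_worker")
runtime 'ruby'               # language runtime
stack   'ruby-2.1'           # pin a specific custom environment
exec    'sendmail_worker.rb' # the executable file for the worker
gemfile 'Gemfile'            # optional: bundle the gems the worker needs
```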

To get a list of the available stacks via the CLI, type the following command:

iron_worker stacks

You'll get back a list along the lines of the following:


Note: Make sure you update the IronWorker CLI first:

gem update iron_worker_ng

That’s it!

Simple Addition → Powerful Results

With a simple addition to a config file, developers get a custom runtime language environment running within a secure sandbox and with the same features customers love – including retries, priority settings, real-time logging, dashboard monitoring, and more.

We believe custom language environments are a huge step forward in bringing you the most powerful and flexible worker system in the world. Try it out and let us know what you think.

Behind the Scenes – Using Docker 
We're using Docker behind the scenes to create and manage the containers for the custom language environments. It provides a number of powerful capabilities including the ability to automatically assemble containers from source code and control application dependencies. We'll post an article in the near future detailing our experiences with Docker. (In short, we’re fans.) If you’d like to know more about our use of Docker, please subscribe to the newsletter.

To Get Started

To give IronWorker custom environments a try, sign up today for a free account. It’s simple to get started and powerful enough for even the heaviest demands. Go here to sign up.

To Create Your Own Custom Environment

If you are operating at production-scale or will be soon and need one or more custom environments, please reach out to our sales and support teams. Contact us at 1-888-939-4623 or send us details at

Wednesday, February 26, 2014

Iron.io Launches on Pivotal Cloud Foundry Platform

Iron.io is proud to announce that IronMQ and IronWorker are now available as add-on services on Pivotal’s web-based Platform-as-a-Service (PaaS), which is available at run.pivotal.io and runs the open source Cloud Foundry platform.

Run.pivotal provides app developers with a powerful option to rapidly deploy and scale new applications. The recent launch of Pivotal CF – a commercial distribution of Cloud Foundry from Pivotal that is deployable on VMware’s vSphere IaaS platform – adds an industrial-strength option for deploying applications on cloud infrastructure, providing choice for business owners who want a combination of on-premise, cloud, and hybrid application hosting solutions.

James Watters
Cloud Foundry at Pivotal
“IronMQ and IronWorker add a proven suite of developer-focused tools to the Cloud Foundry ecosystem,” said James Watters, Head of Product, Cloud Foundry at Pivotal. “This is a great win for developers who want to use best-of-breed tools to build the next generation of web and mobile apps. It augments the breadth of options available today, such as the current AMQP-based message services, with additional message queueing and worker services designed for the way developers build products.”

About IronMQ and IronWorker

IronMQ and IronWorker are elastic cloud services that scale to handle whatever messages and workloads you send them.

IronMQ is a reliable message queueing service perfect for building multi-tier applications. The service features push queues, error queues, message retries, alerts, and a number of other capabilities that are critical for separating internal app components and interfacing with third-party services. IronMQ supports asynchronous messaging, work dispatch, load buffering, database offloading, and more. Accessible through HTTP/REST API calls and client libraries, IronMQ is easy to use, highly available, and requires no setup or maintenance.

IronWorker is an elastic task queue / worker service that scales out the processing to let you focus on building applications that scale. Every production-scale application needs to do work in the background. IronWorker gives you an easy and reliable way to run tens, hundreds, or thousands of tasks at once. Queue tasks from your app, run tasks via webhooks, or schedule jobs to run later. IronWorker supports all common languages including binary packages, offers task retries, and provides sophisticated task monitoring.

The Growth of Multi-Tier Architectures

Whether it’s deploying, monitoring, scaling, or making things fail-safe, the base cloud stack has long been one where app servers and storage solutions are the core. This view is valid but only a partial picture, because cloud applications have become much more complex.

Instead of starting with a two-tier application – the application tier and the database tier – developers are building multi-tier architectures from the outset. They are including components such as message queues, worker systems, key-value data caches, job schedulers, and other services to offload workloads from the main request/response loop and allow applications to be more responsive and do more processing.

Multi-Tier Architectures Increase Scale and Agility
Production-scale cloud applications, for example, use message queues to provide ways to connect processes within systems, interface with other systems, buffer activity to databases, and power service-oriented architectures. They use worker systems to offload processing to the background, scale out processing across many concurrent tasks, or run tasks on regular schedules. Examples of these types of workflows include creating thumbnails, sending emails and notifications, or hitting multiple APIs to get data to display.

The Advantages of Cloud Services

Ready-to-use cloud-based services for message queueing and task processing create tremendous efficiencies and agility. By plugging into elastic cloud services, developers no longer have to stand up and maintain these infrastructure components. They do not have to make them redundant and provision them in multiple zones and regions.

Making message queuing and task processing readily available for Pivotal developers means that they get to build advanced processing capabilities into their applications from the start. With simple API calls, they can create queues, send and receive messages, and process hundreds or thousands of tasks, not just from day one but from minute one.

And they can do it without having to worry about managing servers or dealing with infrastructure or system concerns. The benefits of cloud-based messaging and background/async processing include:

  • Speed to market: applications and systems can be built much more quickly
  • Reduced complexity: reduced risk/overhead in critical but non-strategic areas
  • Increased scalability: ability to seamlessly scale throughput and functionality

Chad Arimura
" offers high-scale HTTP-based messaging and task processing services that accelerate the way cloud developers build distributed systems and create service-oriented architectures. These capabilities alongside the Pivotal Cloud Foundry platform is a powerful combination for developers creating production-scale applications."

Pivotal Cloud Foundry + Iron.io = A Powerful Combination

Just as VMs have made it easier to create new applications, elastic on-demand message queues and asynchronous processing will power another era – large-scale distributed cloud-based systems where message queuing and high-scale task processing is abstracted away from servers and where ease of use, reliability, monitoring, and features specific for the cloud are key.

Developers win because they will be able to build and scale applications much more quickly, at a lower cost, and with far less complexity. Iron.io is honored to partner with Pivotal: we share the same mission, which is to drive this shift in computing and deliver this greater ease and much higher value.

Wednesday, February 19, 2014

Iron.io Announces Alerts for IronMQ

Alerts can now be set on queues to trigger actions.

Iron.io is pleased to announce the release of alerts for IronMQ. IronMQ is a cloud-based message queuing service that offers high scale and high availability. It provides pull and push queues – meaning processes can get messages (pull), or the queue can push messages to processes and other endpoints.

Alerts have now been incorporated into IronMQ. This feature lets developers trigger actions based on the activity within a queue. With alerts, actions can be triggered when the number of messages in a queue reaches a certain threshold. These alerts can support things like auto-scaling, failure detection, load monitoring, and system health checks. 

An Important Feature for Production-Scale Applications

IronMQ has been designed to be the message queue for the cloud. It can serve as a simple buffer between processes but it is also meant for more complex use. It offers push queues, HTTP/REST access, guaranteed one-time delivery, FIFO and now alerts. As a result, it’s even easier to build production-scale applications on cloud infrastructure. 

Instead of a monolithic app structure consisting of a bunch of app servers and a database, applications can be built right from the start as distributed systems, ready to scale as needed to handle increasing workloads. Processes can be separated and scaled up and down effortlessly. More automated workflows can be created to deal with a varying volume of request/response loops and the workloads that result from all those inputs.

Flexible Control of Alerts

Because alerts are so important, we put a flexible alert mechanism in place, giving developers fine-grained control over how they want to be alerted and under what circumstances. Users can select the trigger (or size of the message queue) as well as whether it should be a fixed or a progressive alert (one time or on a scaled basis every x messages). In the case of a progressive trigger, users can choose whether it’s ascending or descending. There’s also a snooze parameter that lets users limit the number of alerts within a certain period of time. 

Alerts are sent to a notification endpoint, which is an IronMQ queue that you define. This queue can be configured to trigger one or more actions. You can push to a single endpoint or you can fan out several actions (up to 100 if you want). You can also kick off workers in IronWorker from this alert queue or send messages to other queues.  

This flexibility in settings, along with using a queue to deliver the alerts, means that you can send the alert to a variety of processes and services. You can send messages to workers using ascending alerts, for example, to launch more servers to handle increasing workloads. (Alternatively, you can scale your servers down with descending alerts.) You can send notifications via SMS or email using Twilio or SendGrid, for example, or you can hit services like PagerDuty. Because an alert queue can be a push queue, you can communicate with any service that accepts a webhook. And in a world where webhooks are becoming pretty magical, this capability opens up a lot of possibilities that even we can’t predict. 

How to Use Alerts in IronMQ

One or more alerts can be set on a pull queue. Within the API, alerts are added to a queue by making a POST to the queue receiving the alerts: 

POST /projects/{Project ID}/queues/{Queue Name}/alerts/

URL Parameters
  • Project ID: The project that the queue belongs to.
  • Queue Name: The name of the queue the alert is set on.
Body Parameters
  • Alerts: An array of alert hashes containing the required "type", "direction", "queue", and "trigger" fields and the optional "snooze" field. The maximum number of alerts per queue is 5. 

Acceptable fields of an alert hash are:
  • type - required - "fixed" or "progressive". If the type is set to "fixed", the alert is triggered when the queue size passes the value set by the trigger parameter. If the type is set to "progressive", alerts are triggered when the queue size passes any of the values calculated as trigger * N where N >= 1. (For example, if trigger is set to 10, alerts are triggered at queue sizes 10, 20, 30, ...)
  • trigger - required. Used to calculate the actual queue sizes at which an alert must be triggered; see the type field description. Trigger must be an integer value greater than 0.
  • direction - required - "asc" or "desc". The direction in which the queue size must be changing when it passes the trigger value. If direction is set to "asc", the queue size must be growing to trigger the alert; if "desc", it must be decreasing.
  • queue - required. Name of the queue to which alert messages are posted.
  • snooze - optional. Number of seconds between alerts. If an alert would be triggered while the snooze delay is still active, the alert is omitted. Snooze must be an integer value greater than or equal to 0.

Note:  The IronMQ client libraries will follow a similar approach in terms of the array and hash fields. See the client library for the language of your choice for more specifics.
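As a small model of the semantics described above (our own illustration, not Iron.io code), an alert fires when the queue size crosses a trigger boundary in the configured direction:

```javascript
// Illustrative model of fixed vs. progressive alert triggering.
// 'prev' and 'curr' are queue sizes before and after a change.
function alertFires(alert, prev, curr) {
  var boundaries = [];
  if (alert.type === 'fixed') {
    boundaries = [alert.trigger];
  } else { // 'progressive': boundaries at trigger * N for N >= 1
    for (var n = alert.trigger; n <= Math.max(prev, curr); n += alert.trigger) {
      boundaries.push(n);
    }
  }
  // Fire if any boundary is crossed in the configured direction.
  return boundaries.some(function (b) {
    return alert.direction === 'asc' ? (prev < b && curr >= b)
                                     : (prev > b && curr <= b);
  });
}
```

For example, a progressive ascending alert with trigger 10 fires as the queue grows past 10, 20, 30, and so on, but not on changes that stay between boundaries.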

Sample Settings for Alerts

Setting up Auto-scaling for a Queue
To have a pull queue auto-scale the processing of messages, you can use a progressive alert. For example, set a progressive alert with a trigger of 1000, in the ascending direction, that posts to a queue named “worker_push_queue”. This pattern sends an alert to “worker_push_queue”, which can then trigger additional workers and allow for seamless auto-scaling.

  "type": "progressive",
  "trigger": 1000,
  "direction": "asc",
  "queue": "worker_push_queue"

Tuesday, February 18, 2014

Top 10 Uses of IronWorker

Developers tell us every day how much they love the IronWorker platform and how they use it. We wanted to share a number of these examples so that other developers have answers to the simple question “How do I use IronWorker?”

At Iron.io, we’re all about scaling out workloads and performing work asynchronously. This list is a great encapsulation of that philosophy. It’s a pretty powerful set and we’re confident there are uses here that every developer can benefit from.

1.  Image Processing

Process Images in the Background
Pictures are a critical piece in consumer applications. If you’re not making use of them in your app, then you’re missing out on ways to capture users and increase engagement. Nearly every use of photos requires some element of image processing, whether that’s resizing, rotating, sharpening, watermarks, thumbnails, or otherwise. Image processing is, more often than not, compute-heavy, asynchronous in nature, and linearly scaling (more users mean more processing). These aspects all make it a great fit for the flexible and elastic nature of IronWorker.

The most common libraries for image processing we see in IronWorker are ImageMagick, GraphicsMagick, and LibGD. These packages are easy to use and provide some incredible capabilities. It’s easy to include them within a worker and then upload it to IronWorker. The beauty of this use case is that image processing is typically an atomic operation. An image is uploaded, processed, and then stored in S3 or another datastore. There may be call-backs to the originating client, or another event might be triggered, but the processing is isolated and perfect for running within a distributed and virtual environment. Scaling something like this in IronWorker is as simple as sending it more tasks – very little additional work for developers and, in return, almost limitless scale.
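As a concrete (if simplified) illustration of the kind of atomic computation such a worker performs, here is the aspect-ratio math behind thumbnail generation. In a real worker you would hand the resulting dimensions to ImageMagick or a similar library; the function name here is ours:

```python
def fit_within(width, height, max_width, max_height):
    """Return new (width, height) scaled to fit inside the bounding
    box while preserving the image's aspect ratio. Never upscales."""
    scale = min(max_width / width, max_height / height, 1.0)
    return (round(width * scale), round(height * scale))

# A 3000x2000 photo reduced to fit a 300x300 thumbnail box:
print(fit_within(3000, 2000, 300, 300))  # (300, 200)
```

Because each task carries everything it needs (the image reference and the target size), thousands of these can run side by side with no coordination.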

2.  Web Crawling  |  Data Extraction

Access and Crawl Data at Scale
The web is full of data — from social, to weather, to real estate, to bitcoin transactions, data is available to access, extract, share, create derivatives from, and transform in any number of ways. But crawling and extracting data from the web requires lots of concurrent processes that run on a continual or frequent basis. That makes it another great fit for background processing and IronWorker.

Several great code libraries exist to help with web crawling, including packages such as PhantomJS, CasperJS, Nutch, and Nokogiri – all of which run seamlessly on the IronWorker platform. As with image processing, web crawling is essentially a matter of including these packages within your worker, uploading them to IronWorker, and then crawling at will.

There might be a sequence of steps – grab a page, extract links, get various page entities, and then process the most important ones – in which case, additional workers can be created and chained together. To give you a good idea of what’s possible here, we've written several examples and blog posts that you can find here and here.
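The “extract links” step in that sequence can be sketched with nothing but the standard library. This is a simplified stand-in for the packages above, and the class and variable names are ours:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = '<html><body><a href="/docs">Docs</a> <a href="/blog">Blog</a></body></html>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # ['/docs', '/blog']
```

In a chained-worker setup, each extracted link would simply be queued as a new crawl task for the next worker in line.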

3.  Sending Push Notifications

Coordinate Push Notifications
A push notification is a message sent from a central server (publisher) to an end device (subscriber). The two most common platforms for sending push notifications are the Apple Push Notification Service (APNS) for iOS and Google Cloud Messaging (GCM) for Android.

Push notifications tend to go out in batches. For example, a breaking news alert might be sent to millions of subscribers. Notice of a flight delay might be sent to thousands of flyers. Sending these notifications out in serial batches takes way too long. A better architecture is to use IronWorker to deliver these push notifications through APNS and GCM in parallel. This approach also lends itself to processing the lists on the fly to either dedup lists or offer customized messages.

With a news alert, for example, you could spin up 1,000 workers in parallel, each serially sending a batch of 1,000 notifications. This would reach over a million news subscribers in the time it takes to process a single batch. This is a huge advantage in delivery speed and a capability that would be hard to create and manage on your own. With IronWorker, it’s a relatively simple matter to get this type of concurrency and throughput.
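The fan-out step amounts to slicing the subscriber list into worker-sized payloads, roughly like this (a minimal sketch; the function name and batch size are ours):

```python
def batches(subscribers, size=1000):
    """Split a subscriber list into fixed-size batches, one per worker task."""
    for i in range(0, len(subscribers), size):
        yield subscribers[i:i + size]

# A million device tokens become 1,000 worker-sized payloads:
tokens = ["device-%d" % n for n in range(1_000_000)]
payloads = list(batches(tokens))
print(len(payloads), len(payloads[0]))  # 1000 1000
```

Each payload would then be queued as one IronWorker task that loops over its tokens, calling APNS or GCM; all 1,000 tasks run in parallel.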

4.  Mobile Compute Cloud

Process Mobile Events in the Background
Mobile applications push a lot of the processing off the device and into the background. Services and frameworks like Parse and Firebase allow for rapid mobile app development by providing backend services such as user management and mobile app-centric datastores.

But these frameworks don’t work so well when it comes to providing processing capabilities. (Parse Cloud Code, as an example, provides a number of capabilities but falls short in many ways). Processing lots of evented data is where IronWorker shines.

Data can be put on a message queue directly from mobile apps and then workers in IronWorker can be running to continually process events from the queue. The processing that’s performed is entirely dependent on the needs of the app.

Alternatively, the mobile frameworks mentioned above can also connect to HTTP webhooks. You can point these endpoints at workers, which are then kicked off to perform actions. Using IronWorker as an asynchronous (and almost serverless) processing engine makes building powerful mobile applications a breeze.
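The consume-loop pattern from the queue-based approach looks roughly like this. We use the standard library's in-process queue as a stand-in for an IronMQ queue, and the event shape and handler are hypothetical; a real worker would use the IronMQ client for its language:

```python
import json
import queue

# Stand-in for an IronMQ queue fed by mobile apps.
events = queue.Queue()
for i in range(3):
    events.put(json.dumps({"user_id": i, "action": "photo_uploaded"}))

def process(event):
    """App-specific handling of one mobile event."""
    return "handled %s for user %d" % (event["action"], event["user_id"])

results = []
while not events.empty():
    event = json.loads(events.get())  # messages arrive as JSON payloads
    results.append(process(event))
    events.task_done()

print(results[0])  # handled photo_uploaded for user 0
```

The worker stays agnostic about what "process" means; that part is entirely dependent on the needs of the app, as noted above.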

5.  Data Processing

“Big data” is certainly a hot topic these days and Hadoop is a common answer. But all data is not “big” and even when it is, many “big-data” problems don’t work well with a map-reduce model. A couple of supporting articles on this theme can be found here and here.

Process Data on a Continual Basis
In the end, a large number of “big data” use cases essentially boil down to large-scale “data processing,” and IronWorker is made for this. Let’s say you have a big list of zip codes and need to pull weather data from a weather API as well as population data from a different API that times out after 10 concurrent connections. Traditional “big data” solutions are simply too complex to manage situations like this. IronWorker provides a flexible but still massively parallel way to accomplish it.

Or Scale-out Your Processing with Task-Level Granularity
You can run tasks in parallel as well as perform complex workflows. High concurrency can be brought to bear so that 1000s of processes can run at a single time. Alternatively, you can put constraints on the processing so that only a limited number of workers run at a single time. In the case above, setting a max concurrency would ensure that you don’t exceed the 10 connection limit on the population API.

As with web crawling, tasks can be chained together and results stored in a cache or other datastore or placed on a queue for additional processing or aggregating results. The platform is flexible and powerful enough to process almost any type of data – big, small, hot, cold, or anywhere in between.

Monday, February 17, 2014

How One User Automated a Research Study with IronWorker and Twilio (a repost from Usability Panda)

Katarzyna Stawarz, a PhD student at University College London, wrote a really nice blog post on using Twilio and IronWorker to communicate via SMS. She was conducting a study that tested a method of habit formation research and needed a way to send out reminders to participants at specific times during the day, every day, for 4 weeks.

Originally, Katarzyna was going to do it manually but then decided to dust off her programming skills and automate the whole thing. Brilliant!

She used Twilio for the SMS, of course, along with IronWorker for scheduling and async processing, and after just a bit of coding she was up and running in no time flat – receiving and responding to several thousand study responses and managing 1,000+ reminders.
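The scheduling half of that setup boils down to generating one reminder time per day for the length of the study. A minimal sketch with the standard library, assuming a hypothetical 9:00am reminder time (the function name and times are ours, not from her code):

```python
from datetime import datetime, timedelta

def reminder_schedule(start, days=28, hour=9, minute=0):
    """One reminder per day at a fixed local time, for the length
    of the study (4 weeks = 28 days)."""
    first = start.replace(hour=hour, minute=minute, second=0, microsecond=0)
    return [first + timedelta(days=d) for d in range(days)]

schedule = reminder_schedule(datetime(2014, 2, 1))
print(len(schedule), schedule[0].isoformat())  # 28 2014-02-01T09:00:00
```

Each datetime would then become a scheduled IronWorker task that fires the Twilio SMS call at the right moment.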

Katarzyna explains just how easy it was to use IronWorker with Twilio:
I needed a process running on my server to fire up reminders at a specific time and since I haven’t touched servers and their settings for at least 6 years, I wasn’t very keen to suddenly do that. So I used IronWorker to trigger my Twilio code. It was free and surprisingly easy to use, with everything nicely explained.
Here's another quote we couldn't pass up.
Twilio + IronWorker (with code!)
If you want to run a study that requires sending or receiving SMS (or both!), the Twilio + IronWorker combo is a great solution. It’s easy to set up and affordable. One of my colleagues already re-used my code to run her study (although she triggered her messages manually) and was quite happy with the tech. So yay for Twilio and Iron.io :-)

To read more about Katarzyna's use case, visit her blog Usability Panda or find her on Twitter @falkowata!

To learn more about how IronWorker can help your app effortlessly perform work in the background asynchronously, please visit Iron.io today.

Friday, February 14, 2014

Iron.io Drinkup – Booze Queues' Edition

In keeping with all the love going around today, we wanted to let you know about an Iron.io Drinkup we're hosting with our friends at Keen IO next week (Wed, Feb 19th). It'll be at our offices at Heavybit. To match the theme, here are the details in the form of some JSON. Drink up.

  event: {
    name: "Booze Queues' Happy Data Hour",
    type: "meetup",
    pretty_timestamp: "Wednesday, February 19th, 6:30pm-8:30pm",
    location: {
      venue: "Heavybit",
      street_address: "325 9th Street",
      city: "San Francisco",
      state: "CA",
      zip: 94103
    },
    beer: true,
    snacks: true,
    good_times: true,
    host: "Iron.io & Keen IO"
  }

Other Events Next Week

We also have a couple other events going on next week with our Iron faithful.
  1. Wednesday, Feb 19th GoSF meetup at Heroku
    • Food & Drink!
    • Talk 1: Building Distributed Systems with Mesos + Go
    • Talk 2: Stream Multiplexing in Go 
    • Talk 3: Dependency Management

  2. Thursday, Feb 20th SFRails meetup at Blurb
    • Food & Drink!
    • Talk 1: Caching and HTTP Acceleration with Varnish
    • Talk 2: Introduction to Docker + Tips on using Docker
    • Tech Talk: GitHub repo discovery via Sourcegraph

So, join us and introduce yourself to one of our evangelists (@yaronsadka and @stephenitis)...they might have a couple shirts to dish out.