

Tuesday, April 21, 2015

Hubble Gets Lean With Microservices and Iron.io

As the microservices pattern continues to spread through the industry as a dominant approach to building modern cloud applications, marquee examples from large-scale companies such as Netflix and Twitter may appear daunting to companies still on a growth path. When powering through agile cycles to release new features at a rapid pace, the last thing on your mind is maintainability. Well, maybe not the last thing, but it is a lesser concern.

It’s rare to have the foresight to recognize future bottlenecks early on, so when we came across a series of blog posts by Tom Watson, the CTO and Co-founder of Hubble, who did just that, we took notice and had a quick chat to discuss his experiences. As it turns out, taking a moment to reflect on how things scale as they grow put Hubble in a position to release features more quickly and effectively, by adopting a lean methodology built on focused microservices development and operations.

Hubble is an online marketplace for London office space that launched in January of 2014. Coming out of Entrepreneur First, a European accelerator program that brings like-minded people together, founders Tom Watson and Tushar Agarwal met and formed a company to solve the many challenges of searching for and finding the right space to work. (If London is anything like San Francisco, then we can most certainly sympathize!) The premise of Hubble is to connect hosts who have spare space with tenants who need it. These hosts range from people who have a single spare desk to startups who may have five desks available in their office. Once the connection is made, the platform facilitates an open dialog between hosts and tenants to serve each other's needs.

Ditching the Monolith

As a small team going through the rigors of an accelerator program, priority #1 was to get up and running as quickly as possible, so they picked Django as the framework for the MVP. Full-stack frameworks such as Django and Ruby on Rails are a great way to quickly prototype and build core functionality, but they can quickly become bloated with dependencies. Onboarding new developers becomes a challenge of ensuring all the right packages are in place across the whole development lifecycle, and deploying the application as a single entity slows down tests and builds. Speed is important to a startup, so after gaining some traction early on, Tom recognized the bottlenecks in their monolithic application and looked for an architecture pattern that would give them greater development speed and effective scalability as they grew.

Getting Distributed and Going Micro

Moving from a monolithic application to microservices seems like a monumental undertaking on the surface, as it’s a completely different approach to structure. However, it doesn’t have to be an all-in switch. One of the key benefits of the pattern is the ability to tackle the migration piece by piece without losing the work that has already been done. This is how Hubble approached the process, after reading up on the subject and talking with other startups that were already further along in their own lightweight, service-based approaches.

By analyzing the core feature set, they were able to identify candidates that each fit the single responsibility principle, in line with the microservices pattern of separating components by business objective. The first obvious feature to split out was billing: all the direct processing and payment info objects. The next was messaging: how messages get sent between users and link up as threads. After going through the process a few times it became second nature, and piece by piece the application became less monolithic and more streamlined.

“Over time we plan to keep doing that sensibly so that we’re spending more time building features and less time worrying about infrastructure.”

- Tom Watson, CTO, Hubble

The API Gateway

When moving toward a microservices architecture, one consideration is ensuring requests are delivered to the proper service. A common approach, which Hubble adopted using Node.js and Express, is an API Gateway that handles routing and authentication. This lightweight layer accepts requests from clients and routes them to the appropriate microservice. Each service sits in a trusted network and is accessed through a private token, with authorization handled at the component level to avoid duplication.
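Hubble's actual gateway is Node.js with Express; as a language-neutral sketch in Ruby, the core routing-plus-token idea might look like the following. The service names, ports, token header, and token value are all invented for illustration.

```ruby
require 'net/http'
require 'uri'

# Hypothetical route table: public path prefix -> internal service base URL.
SERVICES = {
  '/billing'   => 'http://billing.internal:5000',
  '/messaging' => 'http://messaging.internal:5001'
}.freeze

# Shared secret used inside the trusted network (placeholder value).
SERVICE_TOKEN = 'example-private-token'.freeze

# Pick the upstream service URI for an incoming request path.
def route_for(path)
  prefix, base = SERVICES.find { |p, _| path.start_with?(p) }
  return nil unless base
  URI.join(base, path.sub(prefix, ''))
end

# Forward a GET request to the matched service, attaching the token.
def forward(path)
  uri = route_for(path)
  raise "no service for #{path}" unless uri
  req = Net::HTTP::Get.new(uri)
  req['X-Service-Token'] = SERVICE_TOKEN
  Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
end
```

The gateway stays thin: it knows only prefixes and tokens, while each service decides for itself what the token is authorized to do.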

Queue all the Things

As Hubble split out more and more components, one thing became very clear – they needed a message queue to communicate between services in a reliable manner as opposed to direct execution. After first looking into RabbitMQ and Redis, they found IronMQ, which better served their needs.

“I wanted something that was hosted and easy to use, because I was trying to stay as lean as possible. I didn’t really want to have that overhead of DevOps. With IronMQ, not only did it do what I wanted, it also took a lot of the hassle away,” said Watson. “The message queue is such a critical piece of an architecture, but it's one of those that you just don't want to maintain.”
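The pattern Watson describes, replacing direct service-to-service calls with messages on a queue, can be sketched in a few lines of Ruby. Ruby's thread-safe Queue stands in here for a hosted IronMQ queue so the sketch runs without an account; the message fields are invented.

```ruby
require 'json'

# Stand-in for a hosted queue: with IronMQ you would post to and reserve
# from a named queue over HTTP instead of using an in-process Queue.
queue = Queue.new

# Producer: instead of calling the messaging service directly, the web
# tier enqueues a message describing the work to be done.
queue << JSON.generate('type' => 'send_message', 'from' => 'host', 'to' => 'tenant')

# Consumer: a worker pulls messages and processes them, so a slow or
# failed downstream call never blocks the request path.
processed = []
until queue.empty?
  msg = JSON.parse(queue.pop)
  processed << msg['type']
end
processed # => ["send_message"]
```

The payoff is reliability: if the consumer is down, messages wait on the queue instead of being lost in a failed direct call.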

Asynchronous Processing

Once Hubble had spent some time working with IronMQ, they realized that much of the work they had split into microservices was better suited to run asynchronously. Each service is stateless, with only the required dependencies for its task, making IronWorker a logical extension: it provided a streamlined environment for developing and deploying individual microservices, and it also provided for more effective scalability. If the community picks up and more people interact through messaging, those workers can scale up and down on demand without affecting the rest of the application.

“Because you’re dealing with stateless microservices,” said Watson, “one could even foresee a time where you just did all of your logic in IronWorker.”
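A stateless worker of the kind Watson describes carries everything it needs in its payload, so any number of copies can run in parallel. Here is a minimal sketch; the task names and payload fields are invented, and on IronWorker the payload would be supplied by the platform rather than passed in directly.

```ruby
require 'json'

# A stateless worker: no shared state, everything arrives in the payload.
def run_worker(raw_payload)
  payload = JSON.parse(raw_payload)
  case payload['task']
  when 'notify' then "notified #{payload['user']}"
  when 'resize' then "resized #{payload['image']}"
  else raise "unknown task: #{payload['task']}"
  end
end

# On IronWorker the platform queues a task with this payload; here we
# invoke it directly to show the flow.
run_worker('{"task":"notify","user":"tom"}') # => "notified tom"
```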

The Microservices Future

As an early adopter of modern cloud patterns and technologies such as microservices and Iron.io, Hubble has formed a lean organization that can deploy new features in a fraction of the time and at a fraction of the cost they would have incurred had they kept down the monolithic path. A new architecture comes with a new set of considerations, of course, but the benefits are clear. “There’s still a lot of figuring out to do with the future of application development, but what’s been cool about it is how the community’s evolved and how people have really figured out some interesting ways to solve complex problems,” said Watson. “Things that weren’t around when Netflix started out are going to make people think about microservices in a completely different way.”

About Hubble

Tom Watson is CTO and Co-Founder at Hubble. He studied Computer Science at university and, after a short stint at IBM, realized that building a startup was what he really wanted to do straight after graduating. Since then he has co-founded Hubble and sought to make the tech behind it as cutting-edge as possible. You can read his original blog posts on microservices here.

Hubble helps startups and small companies find their perfect home. They are an online marketplace for office space, matching those looking to rent space with co-working spaces, serviced offices and people who just have a few spare desks. Currently the platform is only available in London (UK) but they hope to expand that in the coming months.

How to Get Started Today

To give Iron.io a try, sign up for a free IronWorker or IronMQ account today.

As a reward for signing up, we’ll even extend to you a 30-day trial of advanced features so that you can see how moving to the cloud will change the way you think about application development.

Friday, April 17, 2015

Creating Microservices in Laravel (repost)
We came across a great tech post the other day by developer and writer Alfred Nutile. His post describes a simple process for doing background processing and creating microservices within Laravel, a fantastic PHP framework for modern web developers.

Background Processing and Microservices

GitHub estimated that over 40% of workloads are processed in the background. At Iron.io, we have a number of customer stories that back this up, including Untappd. In a detailed case study, we show how they greatly reduced their user response times by moving 10 different events to the background and processing them with IronWorker.

Creating microservices is an extension of this, essentially formalizing the concept of a worker into a task-specific API-driven service that is highly available and can be run on-demand. The benefits of moving from a monolithic application to a more distributed one are many. They include faster response times (by moving certain events to the background), more effective scaling, a more robust application, and much faster feature development.
In computing, microservices is a software architecture design pattern, in which complex applications are composed of small, independent processes communicating with each other using language-agnostic APIs. These services are small, highly decoupled, and focus on doing a small task.
– Wikipedia

Wednesday, April 8, 2015

How HotelTonight Streamlined their ETL Process Using IronWorker

HotelTonight has reinvented the task of finding and booking discounted hotel rooms at travel destinations. Designed for last-minute travel planners and optimized for the mobile era, HotelTonight connects adventure-seeking, impulse travelers with just-in-time available hotel rooms wherever they land. 

This model has the market-enhancing effect of reducing excess inventory of unused hotel rooms, while delivering a seamless user experience and deep discounts for budget travelers who enjoy impulse travel and adventure. What most travelers may not realize is that behind the scenes at HotelTonight lies a massive business intelligence system that uses a sophisticated cloud-based ETL platform to collect, convert, and store data from multiple external services.

Extract, Transform, Load (ETL) has been around in IT circles for a long time, dating back even to tape storage and the mainframe era, but the difference here is the use of cloud-based services along with a loosely-coupled and flexible approach to move data between systems in near real-time. The benefits include far less overhead and much faster workload processing, while translating into more timely and accessible information with which to make decisions.

Cloud-based ETL - Scalable and Event Driven

The HotelTonight ETL pipeline gathers external data from a host of sources and brings it together in Amazon Redshift, a managed, petabyte-scale data warehouse solution from Amazon Web Services. Amazon Redshift acts as the “Unified Datastore” and, via a Postgres adapter, lets a variety of platforms connect using standard SQL. Custom Ruby scripts power the HotelTonight ETL process, connecting the Business Intelligence team to the SQL Workbench that front-ends the Amazon Redshift clusters. The dashboard lets anyone in the organization query the data and extract information for use in their initiatives.

The net result of this complex operation is a fully aggregated dataset that is more accurate, more up-to-date, and more reliable. Turning raw data into reliable, up-to-date information enables HotelTonight analysts to make faster decisions and faster updates on available hotel room information for their users.
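To make the transform step concrete, here is a hedged Ruby sketch of one pipeline stage: extract (stubbed source data), transform into a uniform row shape, and render a load statement. The "bookings" source and column names are invented, not HotelTonight's actual schema, and a production loader would typically use Redshift's bulk COPY rather than row INSERTs.

```ruby
require 'json'
require 'time'

# Stubbed "extract" output from a hypothetical external source.
RAW = '[{"hotel":"The Grand","booked_at":"2015-04-01T10:30:00Z","price":"129.00"}]'

# Transform: normalize each raw record into a consistent row shape.
def transform(raw_json)
  JSON.parse(raw_json).map do |r|
    {
      hotel:     r['hotel'].strip,
      booked_on: Time.parse(r['booked_at']).strftime('%Y-%m-%d'),
      cents:     (r['price'].to_f * 100).round   # store money as integer cents
    }
  end
end

# Load: render the rows as a SQL statement for the warehouse.
def load_sql(rows)
  values = rows.map { |r| "('#{r[:hotel]}', '#{r[:booked_on]}', #{r[:cents]})" }
  "INSERT INTO bookings (hotel, booked_on, cents) VALUES #{values.join(', ')};"
end

rows = transform(RAW)
load_sql(rows)
# => "INSERT INTO bookings (hotel, booked_on, cents) VALUES ('The Grand', '2015-04-01', 12900);"
```

Each worker in the pipeline runs one such stage for one source, which is what keeps the integrations modular and repeatable.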

The key cog powering this cloud-based ETL process – also allowing it to be scalable and completely event-driven – is IronWorker, an asynchronous task-processing service provided by Iron.io. HotelTonight uses IronWorker as “the go-to platform for scheduling and running our Ruby-based ETL worker pipeline,” says Harlow Ward, former lead developer at HotelTonight.

Harlow Ward, former lead developer at HotelTonight
“The team at Iron.io has been a great partner for us while building the ETL pipeline,” says Ward. “Their worker platform gives us a quick and easy mechanism for deploying and managing all our Ruby workers.”

Harlow further describes how IronWorker ensures HotelTonight’s ETL process is repeatable, scalable and protected in the case of failures. “Keeping [worker] components modular allows us to separate the concerns of each worker and create a repeatable process for each of our ETL integrations," says Ward.

A Distributed ETL Workflow

HotelTonight uses a custom worker for each external data source (see Figure 1 for details of HotelTonight's data sources). This means each data source can be aggregated independently of the others.

Figure 1: HotelTonight Data Sources and Workflow

“IronWorker’s modularity allows for persistent points along the lifetime of the pipeline. It also allows [HotelTonight] to isolate failures and more easily recover should data integrity issues arise,” according to Ward. “Each worker in the pipeline is responsible for its own unit of work and has the ability to kick off the next task in the pipeline.”

For a detailed discussion of the ETL process at work, check out Harlow’s blog at:

This distributed pattern also improves agility: changes can be made quickly within one worker/data-source pull without needing to redeploy a full application or push changes beyond that particular workflow. New data sources can be brought online just by writing simple scripts in whatever language the developers want to use (Ruby, in HotelTonight's case).

Workflow Monitoring and Orchestration

In addition to solving the challenge of quick and easy deployment of independent workers, the Iron.io dashboard (HUD) provides current status and reporting information to HotelTonight developers, giving them instant visibility into the state of their ETL pipeline. Users can control settings for the workflow, including increasing or decreasing concurrency, retrying tasks that failed in prior attempts, and changing job schedules. “The administration area boasts excellent dashboards for reporting worker status and gives us great visibility over the current state of our pipeline,” says Ward.

Figure 2: HUD dashboard of current worker status

Leveraging Unified Data for Faster Decision Making

Now that HotelTonight’s business intelligence data is consolidated in Amazon Redshift, HotelTonight can run SQL queries to combine and correlate data from multiple platforms into a unified dataset. Prior to this solution, HotelTonight’s “data analytics” consisted of exporting CSVs from each data source, merging them into a single pivot table, and then applying lots of “magic” to make sense of it all.

IronWorker makes it possible for HotelTonight to streamline and automate their entire ETL process and bring together all of their disparate data sources in a flexible datastore. HotelTonight can rest easy with the assurance that, in using IronWorker, their data pipeline into Amazon Redshift is in excellent order. 

At Iron.io, we’re big users of HotelTonight and can’t wait to book our next business road show using their service. We wouldn’t think of doing it any other way.


How to Get Started Today

To give Iron.io a try, sign up for a free IronWorker or IronMQ account today at

As a reward for signing up, we’ll even extend to you a 30-day trial of advanced features so that you can see how moving to the cloud will change the way you think about application development.

Tuesday, April 7, 2015

CEO Chad Arimura Speaking at IoT Stream Conference in April

Chad Arimura, CEO and Co-Founder of Iron.io, will be speaking at the IoT Stream Conf conference in April. We're also sponsors of the event.

This conference will bring together architects and builders to discuss best practices and the emerging IoT technology stack. If you want to collaborate with hands-on people solving real IoT challenges, IoT Stream Conf is the place to be.

Here's a description of Chad's talk:
Thur, 11:00am (Apr 23)
Harnessing Microservices and Composable Services for Agility and Scale in IoT Applications
Chad Arimura
Fast-moving, agile organizations such as Netflix, Gilt and Untappd are embracing microservices as the new foundation for software development – a direct response to monolithic approaches of the past. A composable services architecture breaks application development into discrete, logical tasks that are better suited for handling event-driven workloads within distributed cloud environments. 
This session will review the best practices for developers who must address the challenges of deploying and managing service-driven architectures for IoT and stream-oriented workloads. 

The conference is hosted and organized by PubNub and is on Thursday, April 23 at the Bentley Reserve in San Francisco, CA. Other speakers and sponsors are from companies that include GE, Cisco, Intel, Microsoft, AT&T, Softlayer, Ericsson, and others. If you're at the conference, be sure to come up and say hello.

Wednesday, April 1, 2015

Roam Directories Goes Serverless With Iron.io and Other Cloud Services

Users expect immediate access to information, and this expectation is no different in the commercial real estate industry. Fast-moving companies need innovative web tools that enable property managers to upload, update and exchange building information with prospective tenants.

Roam Directories – Creating a New Era of
Commercial Real Estate Directories

Roam Directories, founded in 2013, is a relatively new company in an industry filled with established firms. They created a commercial real estate directory that provides unique and engaging experiences for prospective tenants, while empowering property managers to deliver a rich set of materials that provide an enhanced view of a property. 

To make this possible, Roam Directories built the Atlas directory, an interactive, digital touchscreen display that shows building tenants, visitors, and prospective tenants up-to-date photos, videos, architectural drawings, and other materials about the building they are visiting. The Atlas interface design and workflow that Roam Directories created for property managers is a big part of their success. Also key is the way they address process automation and IT infrastructure management to keep information up to date. The combination delivers fast innovation and reduced costs, letting Roam Directories offer the Atlas service at a highly competitive price.

From Application-Driven to Event-Driven Processing

In addition to delivering innovative design and interaction, a key goal for Roam Directories was to migrate their infrastructure to a “serverless environment” by employing cloud services. They wanted to reduce operational overhead, cut out non-essential capital acquisition, and eliminate worries about VMs, load balancers, and other application and data center concerns. 

In making this transition, Roam Directories leveraged a number of cloud-based services that execute key tasks, such as data processing, imaging handling, user registration, authentication, email distribution, and social media streams. Their processing moved from application-driven to event-driven. Instead of large monolithic applications running constantly in the background, they moved to microservices (i.e. task-specific services running in the cloud that are triggered based on events, automated schedules, and other asynchronous application activities).

Dennis Smolek, CTO and Founder, Roam Directories
“Our biggest goal is to move our entire application to be 100% serverless. Naturally there are challenges related to things like user authentication, priorities, and processing. Our application does not do a ton of data handling on its own, as we’ve done a good job leveraging third-party services...We leverage other services to handle the tasks that a server/cluster normally would,” says Dennis Smolek, CTO and Founder of Roam Directories.

Roam Directories was in a fortunate position of being able to carefully select among a growing catalog of technologies to accelerate their transition to the cloud. This freedom meant choosing not only the best products but also selecting ones that didn’t create vendor lock-in or require specific platforms, languages, patterns, or process flows.

This diagram illustrates the task automation process at Roam Directories.

Enabling Lean and Agile Development Processes

A big part of Roam Directories' migration to a serverless infrastructure was leveraging the Iron.io platform as their main event-driven workload processor. This change allowed them to improve process efficiency and reduce costs, in keeping with their lean and agile philosophy.

Now, email notifications, user registration, and content filtering and monitoring services are all pushed to the cloud and managed by workers running within IronWorker, an asynchronous task-processing service provided by Iron.io. IronWorker delivers the muscle behind the scenes by efficiently orchestrating the individual tasks that are processed on demand as part of the Atlas service.

By leveraging the IronWorker service, Roam Directories is able to offload key tasks such as mass email events to the background, and thus free up valuable resources and save time as well as scale out the workload. Instead of using serial processes that could take hours, the company takes advantage of on-demand scale to distribute the work and shrink the duration. 

A large number of the events and workloads require Roam Directories to push outbound services and data to the Atlas touchscreen displays. Another set of equally important activity is related to data input. “Without a server to poll or query other data sources, or opening up our datastore to less secure third-party services, we were left with a big question of how [getting data into our system] would work. We’ve leveraged workers and scheduled tasks within the IronWorker service to connect to all sorts of APIs and feeds and then decide what other actions to take,” according to Smolek.

This switch not only eliminates having resources run idle, it also lets them respond quickly to new data sources and inputs. To bring data in, they simply write some task-specific code, create a schedule, upload to IronWorker, and run it. 

This diagram illustrates a number of these scheduled tasks and how IronMQ and IronWorker play key roles in the processes.

Another benefit realized by Roam Directories, using this event-driven architecture, involves social media streams. A favorite example is what they’re doing with Twitter.
Twitter’s streaming API allows users to ‘listen’ to feeds and sources like hashtags or even just words in a string. We were originally going to have a server up and running 24/7 whose only job was to listen to Twitter.  
It seemed very wasteful and expensive. Now with workers, we pull our listeners (users and hashtags) from Google’s Firebase service and initiate a stream to Twitter. Every 30 minutes, the worker restarts itself. As each tweet comes in, it automatically gets queued and then fires up another worker that processes the tweet, decides if we are tracking it, and sends it to WebPurify (a profanity filter and image moderation service) to make sure it’s clean. It then pushes the tweet into our Firebase account. 
We are working to improve this a bit, but it has made us go from polling and delayed processing to near real-time Twitter tracking with the security that the content that shows on our screens will be moderated and filtered. All of this at scale, hundreds of tweets automatically queued up for processing with concurrent workers running and making it super fast. 
– Dennis Smolek, CTO and Founder, Roam Directories.
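The per-tweet decision logic Smolek describes can be sketched as a small Ruby filter. The real pipeline calls WebPurify for moderation and pushes results to Firebase; both are replaced here by simple stand-ins (the tracked terms and banned-word list are invented) so the logic is runnable.

```ruby
# Terms this display is "listening" for (illustrative values).
TRACKED = ['#officespace', '@roamdirectories'].freeze

# Stand-in for WebPurify's profanity check (illustrative values).
BANNED = ['badword'].freeze

def tracked?(tweet)
  TRACKED.any? { |t| tweet.downcase.include?(t) }
end

def clean?(tweet)
  BANNED.none? { |w| tweet.downcase.include?(w) }
end

# Returns the tweet if it should appear on screen, nil otherwise.
# In production the kept tweet would be pushed to Firebase here.
def process(tweet)
  tracked?(tweet) && clean?(tweet) ? tweet : nil
end

process('Loving the new #officespace listings!') # kept: tracked and clean
process('random unrelated tweet')                # => nil
```

Because each tweet is processed by its own worker, hundreds can be moderated concurrently without any always-on listener server.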

The Move to Event-Driven Processing

At the beginning of the project, Roam Directories considered a few alternatives to Iron.io. When asked, Dennis explained how he arrived at his decision to use IronWorker. “I started with beanstalkd and Gearman, but that meant dedicated boxes/services for workers, so I looked at SQS, but that didn’t actually handle processing the message, which IronWorker does so well,” said Smolek.

These other task processing solutions may require significant effort to connect the components and orchestrate the workflows. Ops teams also must regularly manage the components and servers that perform the processing. The IronWorker platform provides the orchestration, management, and processing including retries, priority queues, monitoring, reporting, and more.

Automation is key for small startups and teams. Tools like Zapier are great for connecting one app to another, but with a full application you need to have more flexibility and management...With Iron.io, we have much higher levels of control and monitoring. 
Going serverless is an insane money saver. For many front-end/support applications, a large portion of server time is spent idling. And no matter how well you design your systems to scale, you will have a ton of CPU/storage/instances doing nothing but costing you money. We are on a developer plan with Iron.io and we expect to save at least $2,000/mo. We are a very early stage company, so that kind of savings is huge. 
– Dennis Smolek


We’re pleased that the folks at Roam Directories are such strong fans of IronWorker. And we’re always glad to hear stories that reinforce use cases where Iron.io can help growing companies like Roam Directories move quickly, scale with little effort, and realize big cost savings along the way.


About Dennis Smolek

Dennis Smolek is CTO and founder of Roam Directories. He has worked in the interactive space for the past 10 years, starting his own creative agency and developing high-end interactive solutions.

About Roam Directories

Roam Directories' mission is to create a new era of directories that deliver a unique experience to office buildings. With a focus on functionality, design, and customization, Roam's directories do more than simply list information like companies and contacts. Incorporating familiar concepts from web and mobile design such as high-impact images, quality typography, and interactive layouts, Roam's touchscreen interfaces stand out from competitors.


How to Get Started 

To give Iron.io a try, sign up for a free IronWorker or IronMQ account today at

As a reward for signing up, we’ll even provide you with a 30-day trial of advanced features so that you can see how moving to the cloud will change the way you think about application development.

Friday, March 27, 2015

Super Easy Serverless Slack Bots

Slack has a great API for building integrations, and one type of integration is called a "bot". Bots are useful tools that can respond to messages chat users type into a chatroom. For instance, you could type "what is the weather" and the bot would respond with today's weather.

Bots are simply software programs that run on a server somewhere. When someone types a special sequence of characters into Slack (these usually start with a '/'), the message is sent to the bot. The bot then responds with whatever answer it wants to give, and that answer is posted back to the chatroom.

Way cool. Buuuut... you have to run the bots on a server somewhere that Slack can communicate with and it always has to be running whether it's being used or not. True PITA.

IronWorker is an event driven processing (EDP) system and responding to commands/messages is what it does best. So why not respond to chat events? Whenever a keyword or slash command is typed into Slack, an IronWorker will execute to respond to the request. No servers required! No waste either as the worker bot will only run when it's called and will stop as soon as it's finished responding. Perfect.

Hello World Example

Here I'll show you how to make the simplest slack bot in the world. When you type /hello it will post “Hello World!” to the room.

1) Write our bot

Here's the code for hellobot:
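(The original listing was lost from this archive; what follows is a minimal reconstruction under a few assumptions: the -payload flag follows IronWorker's payload-file convention, the form fields are Slack's slash-command parameters, and the webhook post is guarded behind a config.json check so the script can be loaded without credentials.)

```ruby
require 'json'
require 'net/http'
require 'uri'

# Slack sends slash commands as a form-encoded POST body; IronWorker
# hands that body to the task in a file passed via the -payload flag.
def parse_payload(body)
  URI.decode_www_form(body).to_h
end

# The reply we post back through the incoming webhook: "Hello World!"
# to the channel the command came from.
def hello_message(fields)
  { 'channel' => "##{fields['channel_name']}", 'text' => 'Hello World!' }
end

# Only talk to Slack when run as a worker with a config.json present.
if File.exist?('config.json') && (idx = ARGV.index('-payload'))
  fields  = parse_payload(File.read(ARGV[idx + 1]))
  webhook = JSON.parse(File.read('config.json'))['webhook_url']
  Net::HTTP.post_form(URI(webhook), 'payload' => JSON.generate(hello_message(fields)))
end
```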

The code above should be pretty straightforward: we get a message from Slack (the payload), then we send "Hello World!" back to Slack on the right channel. It's in Ruby, but it could be in any language.

Now let's tie everything together and get it working.

2) Get Incoming Webhook URL from Slack

In Slack, go to Integrations, then Incoming Webhooks, then click Add. Choose a channel, then click Add again. Slack will provide you with a webhook URL. Create a file called config.json with the following content:
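A minimal config.json (the URL below is a placeholder):

```json
{
  "webhook_url": "https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX"
}
```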

Replace the webhook_url string in config.json with the one that Slack provided.

3) Test the Bot / Worker

Since this is Ruby, we need a Gemfile to define our dependencies.

Install the gems to current directory so we can run them in Docker and for uploading to IronWorker.

Here’s a sample of the POST body Slack will send to the bot.
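(The original sample isn't preserved here; Slack's slash-command POST body is form-encoded, so a representative body, with placeholder tokens and IDs, looks like this:)

```
token=XXXXXXXXXXXXXXXXXXXXXXXX&team_id=T0001&team_domain=example&channel_id=C2147483705&channel_name=random&user_id=U2147483697&user_name=steve&command=%2Fhello&text=
```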

Copy and paste this into a file named slack.payload.

Now run it to test it with the example payload.

You should see Hello World! in #random now!

Ok, we’re all good, let’s upload it to IronWorker.

4) Upload to IronWorker

Now it's time to upload it to IronWorker so Slack can send messages to it and the IronWorker platform will take care of the rest.
Grab the URL it prints to the console and go to it in your browser; it will look something like this:

On that page, you’ll see a Webhook URL, it will look something like this:

Copy that URL, we'll use it in the next step.

5) Create a Slash Command in Slack

In Slack, go to Integrations, find Slash Commands, click Add, type in /hello as the command then click Add again. On the next page, take the IronWorker’s webhook URL you got in the step above and paste it into the URL field then click Save Integration.

6) Try it out! Type /hello into a Slack channel

Alright, now it's time to try the whole thing out. Go to a slack room and type /hello.

You should see this:


This bot isn't really that useful, but it's a good one to get you started and a good template to build more useful bots from. I've got a few more example bots I'll post in the weeks to come in the GitHub repo below and we'd love to hear about any IronBots that you make. If they're good, we'll share them too.

You can find the full source for this example and a bunch of other bots here:

Thursday, March 19, 2015

The New Docker Based IronWorker Development Workflow

Creating a simple IronWorker worker can be quick and easy. But if you’ve used IronWorker to build a complex worker, you may have found the experience a bit cumbersome. 

The reason is that sometimes you just don’t know if your worker will run the same way it does when you run it locally, due to the local environment or perhaps missing dependencies. 

The typical development process for building an IronWorker looked something like this:

  1. Write/debug your worker.
  2. Upload your worker – this can take some time for large workers with a lot of dependencies, even more so if you are doing remote builds (remote builds are sometimes required to ensure your code or dependencies with native extensions are built on the right architecture).
  3. Queue a task to run your worker.
  4. Wait for it to run (may take a few seconds).
  5. View the log on the console, aka HUD (another few seconds to pull it up).
  6. Repeat… over and over until it works right.

If you have to do a lot of debugging, this can waste a lot of time and cause some serious pain.

Introducing a New Workflow

This upload process has changed: you can now test your worker locally, in the exact same environment as when it runs on the IronWorker platform, using Iron.io's public Docker images. Plus, you can upload workers and interact with the system using a new CLI tool written in Go.

The new workflow is only slightly different:

  1. Ensure all dependencies for your worker are in the current directory or in sub-directories.
  2. Create an input/payload example file (check this into source control as an example).
  3. Build/run/test your worker inside an image container.
  4. Debug/test until you get it working properly.
  5. Once it works as expected, upload it to IronWorker. 

That's it. You should only need to go through this process once, unless you want to make changes. The reason is that if your worker runs inside our Docker images, it will run the same way when it’s running on the IronWorker platform.
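The worker itself stays ordinary code. Here is a minimal sketch of a Python worker that reads its payload from a file passed via a `-payload` argument (the convention IronWorker uses when it invokes a worker); the file and field names here are hypothetical examples:

```python
import json
import sys

def read_payload(argv):
    # IronWorker hands a worker its input via a "-payload <file>" argument;
    # locally, you pass your example payload file the same way.
    for i, arg in enumerate(argv):
        if arg == "-payload" and i + 1 < len(argv):
            with open(argv[i + 1]) as f:
                return json.load(f)
    return {}

def run(payload):
    # The worker's actual job: here, just greet whoever is in the payload.
    return "Hello, %s!" % payload.get("name", "world")

if __name__ == "__main__":
    print(run(read_payload(sys.argv[1:])))
```

To test it in the same environment as the platform, you would run it inside one of the public images, along the lines of `docker run --rm -v "$PWD":/worker -w /worker iron/python python hello_worker.py -payload example.payload.json` (image and file names assumed for illustration).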

The process may look similar on paper, but it’s a big change in practice. There are a number of benefits to this new workflow:

  • No Ruby dependency to use the command-line tools. The new CLI tool is written in Go.
  • No remote builds necessary. You can build locally no matter what operating system you are using.
  • No packaging “magic.” The old iron_worker_ng CLI tool did a lot of work to create IronWorker code packages that worked across languages and operating systems.
  • No .worker files to define your dependencies.
  • Faster development.

Try It Using Some New Examples

We’ve created a repository with examples in a bunch of different languages so you can try it out:

Note: you can still do things as you've always been doing, but we believe this provides a better, more consistent, and quicker development process.

Give it a try and let us know what you think!

Wednesday, March 18, 2015

Why Large Scale Drupal Users Need To Increase Application Responsiveness

We’re always pleased to receive stories about how Iron.io products have helped make people’s lives easier. This is the story of how Schwartz Media, from the land down under, addressed the challenging task of managing and deploying multiple Drupal sites on the Pantheon platform.

Owen Kelly is the Director of Technology at Schwartz Media, publisher of The Saturday Paper, the Monthly and Quarterly Essay. Owen and his team brought IronMQ into their Pantheon-based framework to manage Push queues for processing PHP jobs. These jobs perform a variety of tasks, all processing in the background. 

By distributing tasks, Schwartz Media was able to accelerate the primary event loop, while allocating and scaling out asynchronous activities. The net benefit is faster processing with less overhead for the Schwartz Media technology team.

"If deploying and managing websites is a team sport, then large scale publishers like Schwartz Media represent the Pro Leagues. Pantheon is committed to the success of publishers with large scale Drupal implementations, and we're excited that Schwartz Media has achieved such an agile framework with Pantheon and Iron.io," says Josh Koenig, Co-Founder and Head of Developer Experience at Pantheon.

Quick Integration

After a quick 2-hour integration, Owen was able to accelerate new and updated site deployments by moving processing jobs to IronMQ. As part of this implementation, IronMQ pushes payloads to the Schwartz Media receiver, which then processes the job. With this distributed approach, Owen and his team delivered a secure, container-based design that prevents sharing of Personally Identifiable Information (PII). To provide failsafe processing, the team uses cron jobs to clean up any missed jobs that error out within the receiver.

“Everything is fast again [with IronMQ], and our subscriptions team is happy,” said Owen Kelly.

Schwartz Media is a great success story that shows the power of leveraging IronMQ for Drupal deployments. We’re always big fans of stories about content management and delivery, as they drive a vast majority of our use cases. It’s especially appealing within Drupal-based applications, because many involve large implementations running at extreme scale and availability.

“With Iron.io we have built a really simple background job processor that saves our customer support team up to 5 seconds on every transaction they do. We create a job on our end, pop the job ID in the MQ, the MQ then pushes the ID back to our receiver, which completes the job. And it took less than 2 hours to build, test and deploy,” according to Kelly.
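The flow Kelly describes is simple enough to sketch. Below is a minimal in-memory stand-in for it, assuming hypothetical names throughout: in production, the queue would be an IronMQ push queue and the receiver an HTTP endpoint, but the shape of the flow is the same — create a job, put only its ID on the queue, and let the receiver complete it:

```python
import queue

jobs = {}           # job store: id -> job record
mq = queue.Queue()  # stands in for the IronMQ push queue

def create_job(job_id, action):
    # Step 1: create the job on our end and pop the job ID in the MQ.
    jobs[job_id] = {"action": action, "done": False}
    mq.put(job_id)

def receiver():
    # Step 2: the MQ pushes the ID to the receiver, which completes the
    # job. Only the ID travels through the queue, so no PII leaves the
    # application boundary.
    while not mq.empty():
        job_id = mq.get()
        jobs[job_id]["done"] = True

create_job(1, "renew-subscription")
receiver()
print(jobs[1]["done"])  # True
```

Passing only a job ID through the queue is what makes the PII story work: the sensitive record never leaves the application's own datastore.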

Talk about measurable benefits. We wish Owen and his team continued success, and we look forward to seeing more developments to come.

How to Get Started 

To give Iron.io a try, sign up for a free IronWorker or IronMQ account today at

As a reward for signing up, we’ll even provide you with a 30-day trial of advanced features so that you can see how processing at scale will change the way you think about application development.

Monday, March 16, 2015

Iron.io Releases Enhanced Security Package – IP Whitelisting, VPN, and VPC Support

Iron.io is pleased to announce the release of an enhanced security package for our IronMQ and IronWorker platforms. This package includes access to virtual IP addresses and support for virtual private networks (VPN) and virtual private clouds (VPC).

Elastic IP Addresses (EIP) – This feature provides access to the list of virtual IP addresses that workers are running on. This access enables users to set up network whitelisting policies that permit traffic only from verified IP addresses. This is a major benefit for processing workflows that pass through firewalls and other physical or virtual boundaries.

Virtual Private Networks (VPN) – A virtual private network (VPN) lets network operators manage a variety of services and components within the public cloud while sending and receiving data as if they were directly connected to a private network. Users benefit from the flexibility of operating in a shared environment while simultaneously gaining all the functionality, security, and management policies of a private network. Iron.io can assist you in configuring VPNs that include IronWorker or IronMQ clusters, so that you get the best of both worlds – enhanced security and advanced event-based computing services.

Virtual Private Clouds (VPC) – IronWorker and IronMQ clusters can now be provisioned in a virtual private cloud within AWS. Users can then have traffic routed through private IPs using VPC peering. This capability provides dedicated network policies that can be customized to suit any workload or processing capability. Using a VPC provides an extended layer of isolation that enhances security while still allowing for flexible, high-scale processing.

Availability of Security Package

These security features are available now for customers on Dedicated plans, and the capabilities can be expanded in accordance with customer requirements. For example, for IP whitelisting, a set of virtual IP addresses can be provisioned based on the concurrency levels within your IronWorker plan. VPN and VPC configurations can be set up based on zone and region within AWS.

Please contact one of our account representatives to discuss the features in this enhanced package or to set up an architectural review for a deeper dive.

Dedicated Clusters for Increased Workload Processing 

Dedicated IronWorker clusters are recommended for customers with heavy workloads and/or stricter requirements around task execution and job latency. A ‘dedicated cluster’ means sets of workers are provisioned to run tasks on a dedicated basis for specific customers. Clusters can vary in concurrency – starting at 100 workers and extending into the thousands. The benefits include guaranteed concurrency and strict latencies on task execution.

Recommended use cases for dedicated workers include:

Push Notifications – Many media sites are using dedicated workers to send out push notifications for fast-breaking news and entertainment. These media properties have allocated a specific number of dedicated workers giving them guaranteed concurrency to handle the steady flow of notifications. The dedicated nature and easy scalability of the clusters means they’re able to meet their demanding targets for timely delivery.

Event and Stream Processing – Customers are also employing dedicated clusters to process events asynchronously in real-time or near real-time - typically for offloading tasks from their main event loop (as in the case of web or mobile apps) or processing event streams directly (as in the case of IoT applications).

Image, Audio, and Video Processing – Other customers use dedicated clusters to provide for maximum concurrency and minimal processing latency for processing large quantities of digital media files.

How to Get Started 

To give Iron.io a try, sign up for a free IronWorker or IronMQ account today at

As a reward for signing up, we’ll even provide you with a 30-day trial of advanced features so that you can see how processing at scale will change the way you think about application development.

What are you waiting for? Simple and scalable processing awaits.

Wednesday, March 4, 2015

Chance to win $500 with your story in the AirPair $100K developer writing competition

Iron.io is a proud sponsor of the AirPair $100K developer writing competition. As part of our community engagement, we invite you to submit a post for a chance to win a $500 prize for best article. Posts can take the form of your narrated experience using Iron.io in production – including problems solved, lessons learned and wisdom gained.

To kickstart ideas on topics we’d love to read about, here are key areas of interest:
  • Tell us about your experiences applying event-driven asynchronous processing or moving from a monolithic app environment to a microservices architecture.
  • What are the “Top 5” reasons you chose IronMQ/IronWorker to accelerate your distributed computing deployment?

To enter a post or for more details about the competition, go to:

Good luck. We look forward to reading your submission.

The Iron.io team