

Wednesday, April 1, 2015

Roam Directories Goes Serverless With Iron.io and Other Cloud Services

Users expect immediate access to information, and this expectation is no different in the commercial real estate industry. Fast-moving companies need innovative web tools that enable property managers to upload, update and exchange building information with prospective tenants.

Roam Directories – Creating a New Era of
Commercial Real Estate Directories

Roam Directories, founded in 2013, is a relatively new company in an industry filled with established firms. They created a commercial real estate directory that provides unique and engaging experiences for prospective tenants, while empowering property managers to deliver a rich set of materials that provide an enhanced view of a property. 

To make this possible, Roam Directories built the Atlas directory, an interactive, digital touchscreen display that shows building tenants, visitors, and prospective tenants up-to-date photos, videos, architectural drawings, and other materials about the building they are visiting. The Atlas interface design and workflow that Roam Directories created for property managers is a big part of their success. Also key is the way they address process automation and IT infrastructure management to keep information up to date. The combination delivers fast innovation and reduced costs, which lets Roam Directories offer the Atlas service at a highly competitive price.

From Application-Driven to Event-Driven Processing

In addition to delivering innovative design and interaction, a key goal for Roam Directories was to migrate their infrastructure to a “serverless environment” by employing cloud services. They wanted to reduce operational overhead, cut out non-essential capital acquisition, and eliminate worries about VMs, load balancers, and other application and data center concerns. 

In making this transition, Roam Directories leveraged a number of cloud-based services that execute key tasks, such as data processing, image handling, user registration, authentication, email distribution, and social media streams. Their processing moved from application-driven to event-driven. Instead of large monolithic applications running constantly in the background, they moved to microservices (i.e. task-specific services running in the cloud that are triggered based on events, automated schedules, and other asynchronous application activities).

Dennis Smolek, CTO and Founder, Roam Directories
“Our biggest goal is to move our entire application to be 100% serverless. Naturally there are challenges related to things like user authentication, priorities, and processing. Our application does not do a ton of data handling on its own as we’ve done a good job leveraging third party services...We leverage other services to handle the tasks that a server/cluster normally would,” says Dennis Smolek, CTO and Founder of Roam Directories.

Roam Directories was in a fortunate position of being able to carefully select among a growing catalog of technologies to accelerate their transition to the cloud. This freedom meant choosing not only the best products but also selecting ones that didn’t create vendor lock-in or require specific platforms, languages, patterns, or process flows.

This diagram illustrates the task automation process at Roam Directories.

Enabling Lean and Agile Development Processes

A big part of the migration for Roam Directories to a serverless infrastructure was leveraging the Iron.io platform as their main event-driven workload processor. This change allowed them to improve process efficiency and reduce costs in keeping with their lean and agile philosophy.

Now, email notifications, user registration, content filtering and monitoring services are all pushed to the cloud and managed by workers running within IronWorker, an asynchronous task-processing service provided by Iron.io. IronWorker delivers the muscle behind the scenes by efficiently orchestrating the individual tasks that are processed on demand as part of the Atlas service.

By leveraging the IronWorker service, Roam Directories is able to offload key tasks such as mass email events to the background, and thus free up valuable resources and save time as well as scale out the workload. Instead of using serial processes that could take hours, the company takes advantage of on-demand scale to distribute the work and shrink the duration. 

A large number of the events and workloads require Roam Directories to push outbound services and data to the Atlas touchscreen displays. Another equally important set of activities relates to data input. “Without a server to poll or query other data sources or opening up our datastore to less secure third-party services, we were left with a big question on how [getting data into our system] would work. We’ve leveraged workers and scheduled tasks within the IronWorker service to connect to all sorts of APIs and feeds and then decide what other actions to take,” according to Smolek.

This switch not only eliminates having resources run idle, it also lets them respond quickly to new data sources and inputs. To bring data in, they simply write some task-specific code, create a schedule, upload to IronWorker, and run it. 
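To illustrate the pattern (this is a hypothetical sketch, not Roam's actual code), a scheduled ingest worker could poll an outside feed and keep only the items being tracked; the feed URL, field names, and tags below are all invented for the example:

```ruby
require 'json'
require 'net/http'
require 'uri'

# Keep only the feed items whose tags match something we are tracking.
def select_tracked(items, tracked_tags)
  items.select { |item| (item['tags'] & tracked_tags).any? }
end

# Scheduled run: poll the external feed, filter, and hand off downstream.
# FEED_URL is only set when the worker actually runs on a schedule.
if ENV['FEED_URL']
  feed = JSON.parse(Net::HTTP.get(URI.parse(ENV['FEED_URL'])))
  tracked = select_tracked(feed['items'], %w[openings events])
  # ...queue `tracked` for the next stage (moderation, datastore, display)...
end
```

Upload code like this once, attach a schedule, and IronWorker runs it without any always-on server.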

This diagram illustrates a number of these scheduled tasks and how IronMQ and IronWorker play key roles in the processes.

Another benefit Roam Directories realized from this event-driven architecture involves social media streams. A favorite example is what they’re doing with Twitter.
Twitter’s streaming API allows users to ‘listen’ to feeds and sources like hashtags or even just words in a string. We were originally going to have a server up and running 24/7 whose only job was to listen to Twitter.  
It seemed very wasteful and expensive. Now with workers, we pull our listeners (users and hashtags) from Google’s Firebase service and initiate a stream to Twitter. Every 30 minutes, the worker restarts itself. As each tweet comes in, it automatically gets queued and then fires up another worker that processes the tweet, decides if we are tracking it, and sends it to WebPurify (a profanity filter and image moderation service) to make sure it’s clean. It then pushes the tweet into our Firebase account. 
We are working to improve this a bit but it has made us go from polling and delayed processing to near real-time Twitter tracking with the security that the content that shows on our screens will be moderated and filtered. All of this at scale, hundreds of tweets automatically queued up for processing with concurrent workers running and making it super fast.
– Dennis Smolek, CTO and Founder, Roam Directories.

The Move to Event-Driven Processing

At the beginning of the project, Roam Directories considered a few alternatives. When asked, Dennis explained how he arrived at his decision to use IronWorker. “I started with beanstalkd and Gearman but that meant dedicated boxes/services for workers, so I looked at SQS but that didn’t actually handle processing the message, which IronWorkers do so well,” said Smolek.

These other task processing solutions may require significant effort to connect the components and orchestrate the workflows. Ops teams also must regularly manage the components and servers that perform the processing. The IronWorker platform provides the orchestration, management, and processing, including retries, priority queues, monitoring, reporting, and more.

Automation is key for small startups and teams. Tools like Zapier are great to handle connecting one app to another, but with a full application you need to have more flexibility and management...With Iron.io, we have much higher levels of control and monitoring.
Going serverless is an insane money saver. For many front-end/support applications, a large portion of server time is spent idling. And no matter how well you design your systems to scale, you will have a ton of CPU/Storage/Instances doing nothing but costing you money. We are on a developer plan with Iron.io and we expect to save at least $2,000/mo. We are a very early stage company, so that kind of savings is huge.
– Dennis Smolek


We’re pleased that the folks at Roam Directories are such strong fans of IronWorker. And we’re always glad to hear stories that reinforce use cases where Iron.io can help growing companies like Roam Directories move quickly, scale with little effort, and realize big cost savings along the way.


About Dennis Smolek

Dennis Smolek is CTO and founder of Roam Directories. He has worked in the interactive space for the past 10 years starting his own creative agency and developing high end interactive solutions.

About Roam Directories

Roam Directories' mission is to create a new era of directories that deliver a unique experience to office buildings. With a focus on functionality, design, and customization, Roam Directories does more than simply list information like companies and contacts. Incorporating familiar concepts from web and mobile design such as high-impact images, quality typography, and interactive layouts, Roam's touchscreen interfaces stand out from competitors.


How to Get Started 

To give Iron.io a try, sign up for a free IronWorker or IronMQ account today.

As a reward for signing up, we’ll even provide you with a 30-day trial of advanced features so that you can see how moving to the cloud will change the way you think about application development.

Friday, March 27, 2015

Super Easy Serverless Slack Bots

Slack has a great API for building integrations, and one of those types of integrations is called a "bot". Bots are useful tools that can respond to messages the chat users type into a chatroom. For instance, you could type in "what is the weather" and the bot would respond with today's weather.

Bots are simply software programs that run on a server somewhere. When someone types a special sequence of characters in Slack (these usually start with a '/'), the message is sent to the bot. The bot then responds with whatever answer it wants to give, and that answer is posted back to the chatroom.

Way cool. Buuuut... you have to run the bots on a server somewhere that Slack can communicate with and it always has to be running whether it's being used or not. True PITA.

IronWorker is an event-driven processing (EDP) system, and responding to commands/messages is what it does best. So why not respond to chat events? Whenever a keyword or slash command is typed into Slack, an IronWorker will execute to respond to the request. No servers required! No waste either, as the worker bot will only run when it's called and will stop as soon as it's finished responding. Perfect.

Hello World Example

Here I'll show you how to make the simplest Slack bot in the world. When you type /hello it will post “Hello World!” to the room.

1) Write our bot

Here's the code for hellobot:
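Since the original listing is missing here, the following is a minimal reconstruction of what the worker could look like (the `-payload` argument handling and the form-style webhook call are assumptions, not the post's original code):

```ruby
require 'json'
require 'net/http'
require 'uri'

# Build the reply Slack should show, aimed at the channel the command came from.
def hello_response(params)
  { 'channel' => "##{params['channel_name']}", 'text' => 'Hello World!' }
end

# IronWorker hands the Slack POST body to the process via a `-payload <file>` argument.
payload_index = ARGV.index('-payload')
if payload_index
  raw = File.read(ARGV[payload_index + 1])
  params = Hash[URI.decode_www_form(raw)]       # slash-command bodies are form-encoded
  config = JSON.parse(File.read('config.json')) # holds our incoming webhook URL

  # Post the reply back to Slack through the incoming webhook.
  uri = URI.parse(config['webhook_url'])
  Net::HTTP.post_form(uri, 'payload' => JSON.generate(hello_response(params)))
end
```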

The code above should be pretty straightforward: we get a message from Slack (the payload), then we send "Hello World!" back to Slack on the right channel. It's in Ruby, but it could be in any language.

Now let's tie everything together and get it working.

2) Get Incoming Webhook URL from Slack

In Slack, go to Integrations, then Incoming Webhooks, then click Add. Choose a channel, then click Add again. Slack will provide you with a webhook URL. Create a file called config.json with the following content:
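A minimal config.json could look like this (the URL is just a placeholder):

```json
{
  "webhook_url": "https://hooks.slack.com/services/..."
}
```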

Replace the webhook_url string in config.json with the one that Slack provided.

3) Test the Bot / Worker

Since this is Ruby, we need a Gemfile to define our dependencies.
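Since the bot sketch above only uses the Ruby standard library, the Gemfile can be as simple as this (add entries for any gems your own bot pulls in):

```ruby
source 'https://rubygems.org'
# gem 'some_dependency'  # add any gems your bot needs here
```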

Install the gems to the current directory so we can run them in Docker and for uploading to IronWorker.
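One way to vendor the gems into the project directory (exact flags may differ with your Bundler version) is:

```shell
bundle install --standalone --clean
```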

Here’s a sample of the POST body Slack will send to the bot.
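Slash commands arrive form-encoded; a representative (hypothetical) body looks like:

```
token=XXXX&team_id=T0001&channel_id=C12345&channel_name=random&user_name=steve&command=%2Fhello&text=
```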

Copy and paste this into a file named slack.payload.

Now run it to test it with the example payload.
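Running locally inside a public Ruby image might look something like this (image tag and flags are assumptions based on the era's tooling):

```shell
docker run --rm -v "$PWD":/worker -w /worker iron/images:ruby-2.1 \
  ruby hello_worker.rb -payload slack.payload
```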

You should see Hello World! in #random now!

Ok, we’re all good, let’s upload it to IronWorker.

4) Upload to IronWorker

Now it's time to upload it to IronWorker so Slack can send messages to it and the IronWorker platform will take care of the rest.
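With the Go-based CLI, the upload looks roughly like this (command shape and flags are assumptions; check the CLI help for the exact syntax):

```shell
zip -r hellobot.zip .
iron worker upload --zip hellobot.zip --name hellobot ruby hello_worker.rb
```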
Grab the URL it prints to the console and go to it in your browser; it will look something like this:

On that page, you’ll see a Webhook URL; it will look something like this:

Copy that URL, we'll use it in the next step.

5) Create a Slash Command in Slack

In Slack, go to Integrations, find Slash Commands, click Add, type in /hello as the command then click Add again. On the next page, take the IronWorker’s webhook URL you got in the step above and paste it into the URL field then click Save Integration.

6) Try it out! Type /hello into a Slack channel

Alright, now it's time to try the whole thing out. Go to a Slack channel and type /hello.

You should see this:


This bot isn't really that useful, but it's a good one to get you started and a good template to build more useful bots from. I've got a few more example bots I'll post in the weeks to come in the GitHub repo below and we'd love to hear about any IronBots that you make. If they're good, we'll share them too.

You can find the full source for this example here:

Thursday, March 19, 2015

The New Docker Based IronWorker Development Workflow

Creating a simple IronWorker worker can be quick and easy. But if you’ve used IronWorker to build a complex worker, you may have found the experience a bit cumbersome. 

The reason is that sometimes you just don’t know if your worker will run the same way it does when you run it locally, due to the local environment or perhaps missing dependencies. 

The typical development process for building an IronWorker looked something like this:

  1. Write/debug your worker.
  2. Upload your worker – this can take some time for large workers with a lot of dependencies - even more so if you are doing remote builds (remote builds are sometimes required to ensure your code or dependencies with native extensions are built on the right architecture).
  3. Queue a task to run your worker.
  4. Wait for it to run (may take a few seconds).
  5. View the log on the console, aka HUD (another few seconds to pull it up).
  6. Repeat… over and over until it works right.

If you have to do a lot of debugging, this can waste a lot of time and cause some serious pain.

Introducing a New Workflow

This upload process has changed because now you can test your worker locally in the exact same environment as when running on the IronWorker platform, using Iron.io’s public Docker images. Plus, you can upload workers and interact with the system with a new CLI tool written in Go.

The new workflow is only slightly different:

  1. Ensure all dependencies for your worker are in the current directory or in sub-directories.
  2. Create an input/payload example file (check this into source control as an example).
  3. Build/run/test your worker inside a Docker container.
  4. Debug/test until you get it working properly.
  5. Once it works as expected, upload it to IronWorker. 

That's it. You should only need to do this process once, unless you want to make changes. The reason is that if your worker works inside our Docker images, it will work the same way when it’s running on the IronWorker platform.
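For a Ruby worker, steps 3–5 might look like this (image name and upload flags are assumptions; consult the CLI docs for the exact invocation):

```shell
# Steps 3/4: run and debug locally inside the same image used in production
docker run --rm -v "$PWD":/worker -w /worker iron/images:ruby-2.1 \
  ruby my_worker.rb -payload my_payload.json

# Step 5: once it works as expected, upload it
iron worker upload --name my_worker ruby my_worker.rb
```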

The process may look similar on paper, but it’s a big change in practice. There are a number of benefits you get from this new workflow:

  • No Ruby dependency to use the command line tools. The new cli tool is written in Go.
  • No remote builds necessary. You can build it locally no matter what operating system you are using.
  • No packaging “magic.” The iron_worker_ng cli tool did a lot of work to create IronWorker code packages that worked across languages and operating systems.
  • No .worker files to define your dependencies.
  • Faster development.

Try It Using Some New Examples

We’ve created a repository with examples in a bunch of different languages so you can try it out:

Note: you can still do things as you've always been doing, but we believe this provides a better, more consistent, and quicker development process.

Give it a try and let us know what you think!

Wednesday, March 18, 2015

Why Large Scale Drupal Users Need To Increase Application Responsiveness

We’re always pleased to receive stories about how Iron.io products have helped make peoples’ lives easier. This is the story of how Schwartz Media, from the land down under, addressed the challenging task of managing and deploying multiple Drupal sites on the Pantheon platform.
Owen Kelly
Director of Technology
Schwartz Media

Owen Kelly is the Director of Technology at Schwartz Media, publisher of The Saturday Paper, The Monthly, and Quarterly Essay. Owen and his team brought IronMQ into their Pantheon-based framework to manage Push queues for processing PHP jobs. These jobs perform a variety of tasks, all processing in the background.

By distributing tasks, Schwartz Media was able to accelerate the primary event loop, while allocating and scaling out asynchronous activities. The net benefit is faster processing with less overhead for the Schwartz Media technology team.

Josh Koenig

"If deploying and managing websites is a team sport, then large scale publishers like Schwartz Media represent the Pro Leagues. Pantheon is committed to the success of publishers with large scale Drupal implementations and we're excited that Schwartz Media has achieved such an agile framework with Pantheon and Iron.io," says Josh Koenig, Co-Founder and Head of Developer Experience at Pantheon.

Quick Integration

After a quick 2-hour integration, Owen was able to accelerate new and updated site deployments by moving processing jobs to IronMQ. As part of this implementation, IronMQ pushes payloads to the Schwartz Media receiver, which then processes the job. With this distributed approach, Owen and his team delivered a secure container-based design that prevents sharing of Personally Identifiable Information (PII). To provide failsafe processing, the team uses cron jobs to clean up any missed jobs that might error out within the receiver.

“Everything is fast again [with IronMQ], and our subscriptions team is happy,” said Owen Kelly.

Schwartz Media is a great success story that shows the power of leveraging IronMQ for Drupal deployments. We’re always big fans of stories about content management and delivery, as they drive a vast majority of use cases. It’s especially appealing within Drupal-based applications because many cases involve large implementations running at extreme scale and availability.

“With Iron.io we have built a really simple background job processor that saves our customer support team up to 5 seconds on every transaction they do. We create a job on our end, pop the job ID in the MQ, the MQ then pushes the ID back to our receiver which completes the job. And it took less than 2 hours to build, test and deploy,” according to Kelly.

Talk about measurable benefits. We wish Owen and his team continued success, and we look forward to seeing more developments to come.

How to Get Started 

To give Iron.io a try, sign up for a free IronWorker or IronMQ account today.

As a reward for signing up, we’ll even provide you with a 30-day trial of advanced features so that you can see how processing at scale will change the way you think about application development.

Monday, March 16, 2015

Iron.io Releases Enhanced Security Package – IP Whitelisting, VPN, and VPC Support

Iron.io is pleased to announce the release of an enhanced security package for our IronMQ and IronWorker platforms. This package includes access to virtual IP addresses and support for virtual private networks (VPN) and virtual private clouds (VPC).

Elastic IP Addresses (EIP) – This feature provides access to the list of virtual IP addresses that workers are running on. This access enables users to set up network whitelisting policies that permit traffic only from verified IP addresses. This is a major benefit for processing workflows that pass through firewalls and other physical or virtual boundaries.

Virtual Private Networks (VPN) – A virtual private network (VPN) lets network operators manage a variety of services and components within the public cloud but send and receive data as if they were directly connected to a private network. Users benefit from the flexibility of operating in a shared environment while simultaneously gaining all the functionality, security, and management policies of a private network. Iron.io can assist you in configuring VPNs that include IronWorker or IronMQ clusters, so that you get the best of both worlds – enhanced security and advanced event-based computing services.

Virtual Private Clouds (VPC) – IronWorker and IronMQ clusters can now be provisioned in a virtual private cloud within AWS. Users can then have traffic routed through private IPs using VPC peering. This capability provides dedicated network policies that can be customized to suit any workload or processing capability. Using a VPC provides an extended layer of isolation, enhancing security while still allowing for flexible high-scale processing.

Availability of Security Package

These security features are available now for customers on Dedicated plans, and the capabilities can be expanded in accordance with customer requirements. For example, for IP whitelisting, a set of virtual IP addresses can be provisioned based on the concurrency levels within your IronWorker plan. VPN and VPC configurations can be set up based on zone and region within AWS.

Please contact one of our account representatives to discuss the features in this enhanced package or to set up an architectural review for a deeper dive.

Dedicated Clusters for Increased Workload Processing 

Dedicated IronWorker clusters are recommended for customers with heavy workloads and/or stricter requirements around task execution and job latency. A ‘dedicated cluster’ means sets of workers are provisioned to run tasks on a dedicated basis for specific customers. Clusters can vary in concurrency – starting at 100 workers and extending into the thousands. The benefits include guaranteed concurrency and strict latencies on task execution.

Recommended use cases for dedicated workers include:

Push Notifications – Many media sites are using dedicated workers to send out push notifications for fast-breaking news and entertainment. These media properties have allocated a specific number of dedicated workers giving them guaranteed concurrency to handle the steady flow of notifications. The dedicated nature and easy scalability of the clusters means they’re able to meet their demanding targets for timely delivery.

Event and Stream Processing – Customers are also employing dedicated clusters to process events asynchronously in real-time or near real-time - typically for offloading tasks from their main event loop (as in the case of web or mobile apps) or processing event streams directly (as in the case of IoT applications).

Image, Audio, and Video Processing – Other customers use dedicated clusters to provide for maximum concurrency and minimal processing latency for processing large quantities of digital media files.

How to Get Started 

To give Iron.io a try, sign up for a free IronWorker or IronMQ account today.

As a reward for signing up, we’ll even provide you with a 30-day trial of advanced features so that you can see how processing at scale will change the way you think about application development.

What are you waiting for? Simple and scalable processing awaits.

Wednesday, March 4, 2015

Chance to win $500 with your story in the AirPair $100K developer writing competition

Iron.io is a proud sponsor of the AirPair $100K developer writing competition. As part of our community engagement, we invite you to submit a post for a chance to win a $500 prize for best article. Posts can take the form of your narrated experience using Iron.io in production – including problems solved, lessons learned and wisdom gained.

To kickstart ideas on topics we’d love to read about, here are key areas of interest:
  • Tell us about your experiences applying event-driven asynchronous processing or moving from a monolithic app environment to a microservices architecture.
  • What are the “Top 5” reasons you chose IronMQ/IronWorker to accelerate your distributed computing deployment?

To enter a post or for more details about the competition, go to:

Good luck. We look forward to reading your submission.

– The Iron.io team

Tuesday, February 17, 2015

An Outside View on Microservices: Agility and Scale

Alex Bakker, a research director at Saugatuck Technology, just put out a pretty good post on microservices entitled Agility, Microservices, and Digital Business. It provides a good overview of the topic of microservices and is especially strong on what microservices mean to larger organizations and the enterprise.

It's the first of several posts which are part of a larger research report – Evaluating Microservices Part 1: A Path to the Cloud.

Here's an excerpt:
The use of these APIs and Microservices will enable companies to develop additional services, capabilities and applications without replacing existing systems. This gives business a tremendous amount of agility to extend existing applications. 
In Digital Business, this agility is of paramount importance. Most companies that are attempting to transform themselves into Digital Businesses are facing challenges with speed of development, and their ability to react quickly to demand. Microservices allow companies to have separate, smaller development teams that can develop services to support new products, temporary promotions, new integrations, and the ability to scale in the Cloud that will not need to interfere with the existing operations and development cycles on larger, existing applications.

Definitely recommend taking a look at the full post.

We're big believers that microservices and composable service architectures are important trends in application development and provide an answer to ever-increasing development backlogs.

In our post on The Ephemeral Life of Dockerized Microservices, we talk about the use of Docker-based containers to provide event-computing services. In this type of asynchronous processing pattern, containers are only in existence for the duration of the process – providing a highly effective means for powering microservices. 

In our post on Smart Endpoints. Smart Pipes. Smarter Microservices., we talk about the combination of smart services and smart pipes. The idea is to decouple services without derailing your application. Smarter message and workload handling help you do this.

Thursday, February 5, 2015

Smart Endpoints. Smart Pipes. Smarter Microservices.

This is the second post in a series on microservices. Read our previous post on the service computing environment: The Ephemeral Life of Dockerized Microservices.

The use of microservices is a growing trend in application development because it provides an answer to the overhead incurred by monolithic applications. The practice of breaking apart application components into independent services introduces more moving parts to manage, however, requiring a robust communication method in order to keep services and data in sync. The idea is to decouple services without derailing your application.

The common thought within the microservices community is that only the endpoints need to be smart, not the pipe itself. The endpoints being the services that produce and consume messages, and the pipe being the inter-service communication layer. For example, a registration service captures user information, validates the data, and then encrypts the payload before delivering to an account service. The account service then decrypts the payload and writes to a database. In passing the message from the producer to the consumer, the pipe performs no additional work other than its delivery.

While removing any logic from the communication layer is a step in the right direction, there is still a need for smarts in the transport – it’s just how smart that makes all the difference.

Queue all the Things

A key exercise in building out a microservices application is deciding what components to decouple – the general thought being independent processes that run outside of the immediate user response loop. These could be background jobs, long running tasks, or data processing services, for example. Asynchronous processing allows for asynchronous communication, with the message queue as the broker between services.

Get Smart
A "dumb" pipe in the world of message queues is one that provides no more than a mere transport layer. This is certainly a step up from passing data over pure HTTP or TCP/IP, as the queue can still act as a buffer when consumers can't be reached or are overloaded; however, there's little to no insight into what goes on while data is in motion. Business is far too data driven these days to overlook its transit – with the continued proliferation of connected devices only increasing the sheer volume of data being passed around. Imagine losing an order from a billing service to a fulfillment service – just the possibility is unacceptable from a technical and business perspective.

Persistence is a key trait that makes a message queue evolve from being “dumb” to being “smart”, but what does that really mean? Message queues are not databases or caches in the traditional sense, but can provide persistence by writing to disk during the queue process. The general pattern is write once, read once, and then delete when acknowledged by the consumer. The acknowledgement, or ack, is a critical step in ensuring the persistence is working as it’s intended.
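As an illustration (a toy in-memory sketch, not IronMQ's implementation), the write-once, read-once, delete-on-ack pattern can be modeled like this:

```ruby
# Minimal sketch of "smart" queue semantics: FIFO order, one-time delivery
# via reservation, and deletion only when the consumer acknowledges.
class SmartQueue
  def initialize
    @messages = []   # pending messages, in arrival order
    @reserved = {}   # id => body, delivered but not yet acked
    @next_id  = 0
  end

  # Enqueue a message; returns its id.
  def push(body)
    id = (@next_id += 1)
    @messages << [id, body]
    id
  end

  # Deliver the oldest message; it stays held until acked or released.
  def reserve
    id, body = @messages.shift
    return nil unless id
    @reserved[id] = body
    [id, body]
  end

  # Acknowledge: only now is the message deleted for good.
  def ack(id)
    !@reserved.delete(id).nil?
  end

  # An unacked message can be released back for redelivery.
  def release(id)
    body = @reserved.delete(id)
    @messages.unshift([id, body]) if body
  end
end
```

If a consumer dies mid-processing, the reserved message is never deleted and can be redelivered, which is exactly the durability the paragraph above describes.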

There’s more to being "smart" than just message persistence, though. Messages should be delivered once and in a first-in first-out (FIFO) manner. To put that into relevant terms, consider an airplane seating system – client applications produce a request, while the service checks availability and associates an open seat with the user. If the system does not consume those requests one by one in their intended order, it could lead to errors in how it makes those reservations... and angry passengers. Okay, bad example, but you get the idea.

About Performance
Many open source message queue products aim to be the most performant with throughput as the key benchmark (messages/second). While certainly not a metric to ignore, we have yet to encounter a business use case where throughput outweighed reliability. Modern hardware and software have made the difference all but negligible in asynchronous messaging patterns, with network latency being the more important metric that has any noticeable impact on performance requirements.

We have continued to optimize the performance of IronMQ while maintaining the reliability expected by our customer base. A "smart" message queue such as IronMQ that is persistent by design with one-time FIFO delivery provides the right balance between durability and simplicity that is in line with the microservices architectural style.

The Next Word

We've now covered service computing and communication in this series – stay tuned for the next post where we bring it all together with an API gateway.

To get started with IronMQ for free, sign up for an account today. It's the "smart" thing to do ;)

About the Author
Ivan Dwyer is the Director of Channels and Integrations at Iron.io, working with various partners across the entire cloud technology and developer services ecosystem to form strategic alliances around real world business solutions.

UPDATE: We wrote a whitepaper detailing all the ins and outs of building out microservices applications.

Download for free

Tuesday, January 27, 2015

Iron.io Speaking on Docker in Production [Feb 11th]

Travis Reeder, CTO and co-founder at Iron.io, and Reed Allman, a systems engineer, will be talking about Docker in Production at an upcoming meetup.

Luke Marsden from ClusterHQ, John Fiedler from RelateIQ, and Jérôme Petazzoni from Docker will also talk about their experiences on the subject. The ClusterHQ meetup will be held on Wed, Feb 11th at Heavybit Industries in San Francisco, CA.

Here's the description of their talk:

Docker in Production: Lessons from Launching 500M Containers 
Travis Reeder and Reed Allman will talk about Docker's use in production. The two will discuss the requirements and process for integrating Docker into IronWorker, an event-based computing service, as well as the benefits and challenges faced when using it in production at high scale. They will also discuss future possibilities around using Docker as a general deployment tool, including using registries and wrapping Go binaries in Docker to work with any number of orchestration tools.

Other speakers include:
• Luke Marsden, ClusterHQ
• John Fiedler, RelateIQ
• Jérôme Petazzoni, Docker

Meetup: Docker in Production

About the Speakers

Travis Reeder is co-founder and CTO, heading up the architecture and engineering efforts. He is a systems architect and hands-on technologist with 15 years of experience developing high-traffic web applications, including 5+ years building elastic services on virtual infrastructure. He is an expert in Go and is a leading speaker, writer, and proponent of the language. He has written several widely popular posts on Go and Docker.

Reed Allman is a systems engineer working in Go, along with Docker and RocksDB, to solve hard problems within high-scale, fault-tolerant distributed systems. Previously, he worked on a research project with Google to build refactoring tools for the Go language. By his estimation, he's read the language spec more times than is healthy and has gained a somewhat irrational view of programming in anything that doesn't have channels.

How Docker Helped Us
For additional background on our use of Docker, take a look at the following posts we wrote on the subject.

How Docker Helped Us Achieve the (Near) Impossible
In this post, we discuss the decisions behind using Docker, the requirements we had going in, and more details on what it enables us to do.

Docker in Production

In this post, we address some of the challenges we faced in running a Docker-based infrastructure in production, how we overcame them, and why it was worth it.

To try IronWorker for free, sign up for an account today. We'll even give you a trial of some of the advanced features so that you can see how processing at scale will change the way you view modern application development.

Thursday, January 22, 2015

The Ephemeral Life of Dockerized Microservices

When using the word 'ephemeral', it's hard not to think of Snapchat these days; however, the concept also applies to the on-demand computing pattern we promote here with our task-processing service, IronWorker. At a glance, each Docker container in which a task runs is alive only for the duration of the process itself, providing a highly effective environment for powering applications that follow the microservices architectural style.

Long Live the Container

As Docker continues to spread through the industry by promising a standardized, encapsulated runtime across any environment, an entire ecosystem has emerged around containers from their orchestration to their hosting. We were early adopters with our initial use case, and continue to further leverage the technology through multi-cloud deployments and integrations.

While deploying distributed applications within a Dockerized framework is on the fast track to becoming the model of the future, a number of concerns around security, discovery, and failure arise when it is approached with a production-ready mindset. Without digging into those topics too deeply, let's look at where Docker makes sense today, and why we've been so successful with it as a core component of our platform.

People have been surprised by our heavy use of Docker in production; however, the nature of IronWorker lends itself well to the current state of Docker without as much worry about the drawbacks. That's certainly not to say we haven't had our own set of challenges, but we treat each task container as an ephemeral computing resource. Persistence, redundancy, availability – all the things we care so much about when building out our products at the service level do not necessarily apply at the individual task container level. Our concern there is essentially limited to ensuring the runtime occurs when it's supposed to, which allows us to be confident in our heavy use of Docker today.

To give a peek under the hood of IronWorker, we have a number of base Docker images stored in block storage (EBS) that provide language/library environments for running code (15 stacks and counting). Users write and package their code with only the libraries the task depends on, then upload it to our file storage (S3). The IronWorker API allows users to run any task at a set concurrency level, either on demand or on a schedule. Tasks are placed in an internal queue (IronMQ) and then pulled by one of our many task execution servers.

These task execution servers, or "runners" as we like to call them, merge the selected base Docker image with the user's code package in a fresh container, run the process, and then destroy the container. Rinse and repeat at massive scale. This streamlined process is very clean and fast, and we are continually working to tighten it up even further by optimizing the task queue and improving container startup time.
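The runner workflow – fresh container, run the process, destroy – boils down to something like a single `docker run --rm` invocation. Here's a hedged sketch of how such a command could be assembled; the image name, mount path, and entrypoint are hypothetical stand-ins, not our actual runner internals.

```python
def task_command(base_image, code_dir, entrypoint):
    """Build a docker invocation for one ephemeral task run.

    --rm destroys the container as soon as the process exits,
    and -v mounts the user's code package into the container
    read-only, merging it with the selected base image.
    """
    return [
        "docker", "run", "--rm",
        "-v", f"{code_dir}:/task:ro",  # user's uploaded code package
        base_image,                    # language/library base image
    ] + list(entrypoint)

# Hypothetical example: run a Ruby task from an uploaded package.
cmd = task_command("example/ruby", "/tmp/job-42", ["ruby", "/task/worker.rb"])
```

The key design point is that nothing about the container outlives the process: `--rm` means there is no cleanup step to forget and no stale state to leak between tasks.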

Dockerized Microservices

Wikipedia defines microservices as, "a software architecture design pattern, in which complex applications are composed of small, independent processes communicating with each other using language-agnostic APIs. These services are small, highly decoupled and focus on doing a small task." This is in contrast to the monolithic approach where every component is embodied in a single and often cumbersome application framework.

While decoupling app components is not a new concept, microservices provide a more modern approach. What's often missing from the discussion, though, is the computing environment. Where do these individual processes actually live and run? One of the key benefits of the microservices style is more streamlined orchestration at the individual service level; however, scaling and orchestrating infrastructure can get expensive and complex as you separate more and more components if you're not careful.

The ephemeral use of Docker described here fits microservices naturally: the idea is to have independently developed and deployed services that each carry a single responsibility. Whether it's sending emails and notifications, processing images, placing an order, or posting to social media – these processes should run asynchronously, outside the immediate user response loop. That means they don't really need to be hosted in the traditional sense; they only need to be triggered by an event and run on demand.
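That trigger-and-run pattern can be sketched as a tiny task registry: each task is a small, single-responsibility function that runs only when its event fires, with no long-lived server behind it. The event names and functions below are illustrative only, not IronWorker's API.

```python
# Minimal event-triggered dispatch: tasks run on demand, not as
# always-on processes.
TASKS = {}

def task(name):
    """Register a function as a named, single-responsibility task."""
    def register(fn):
        TASKS[name] = fn
        return fn
    return register

@task("email.send")
def send_email(payload):
    return f"emailed {payload['to']}"

@task("image.resize")
def resize_image(payload):
    return f"resized {payload['file']} to {payload['width']}px"

def trigger(event, payload):
    """Run the task for an event on demand."""
    return TASKS[event](payload)

result = trigger("email.send", {"to": "user@example.com"})
# → "emailed user@example.com"
```

In production, `trigger` would be the queue or scheduler firing the event, and each task body would run in its own short-lived container.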

This is where IronWorker comes into play – aside from providing a workload-aware computing environment fit for any task, IronWorker handles all of the operations, provisioning, and processing of your microservices for you in a highly efficient and effective manner. This means you can keep your focus on writing code, without having to worry about how to deploy, manage, and scale it. As microservices evolve into the pattern for building modern cloud applications, having a dynamic platform like IronWorker to handle the bulk of the work will be crucial throughout the entire development lifecycle.

The Next Word

Not every service is a microservice, and there's still the topic of handling requests, state, and inter-service communication. At the end of the day, a microservices application is meant to be a single application, and it must all come together in a unified manner. Stay tuned for the next post, where we talk about those smart pipes.

To get started with IronWorker for free, sign up for an account today. Our containers may be ephemeral, but our service and support are lasting!

About the Author
Ivan Dwyer is the Director of Channels and Integrations, working with various partners across the entire cloud technology and developer services ecosystem to form strategic alliances around real-world business solutions.

UPDATE: We wrote a whitepaper detailing all the ins and outs of building out microservices applications.

Download for free.