

Thursday, May 21, 2015

How Omaze Delivers Once in a Lifetime Experiences Using Iron.io and Rackspace DevOps

Blow Sh*t Up with Arnold Schwarzenegger ... Be Drawn Into an Episode of the Simpsons ... Celebrate the Patriots Victory with Rob Gronkowski.

These aren't even bucket list items, these are unattainable items. That is, until Omaze gets involved. Omaze is an organization that was founded to drive significantly more money and awareness for deserving causes through the chance to live out dream experiences.

Charities offer up personalized events with their celebrity partners where everyone has the chance to win by donating to the cause. Each experience offers a range of reward levels, from signed t-shirts to personalized Skype sessions to Twitter mentions, and once the experience is placed on the Omaze site, the countdown to the grand prize drawing begins. The growing number of high-profile celebrities participating to provide such unique opportunities raises the question – what's your dream experience?

From a technical perspective, the nature of the Omaze model leads to large spikes in traffic when a new campaign is launched. Even though these spikes are fairly predictable, resource provisioning and operations are critical components to be optimized and streamlined. (You wouldn’t want Arnold Schwarzenegger blowing up your backend after all!)

To handle this level of elastic scalability as a growing team, Omaze found a great fit in both Iron.io and Rackspace DevOps, allowing them to keep their focus on delivering the best experiences from top celebrities for great causes. At the end of the day, that’s what matters most.

Processing Workloads Asynchronously in the Background

To improve performance, Omaze recognized that a good portion of their workloads would be better run asynchronously in the background. When a new campaign is launched, they may receive up to 10,000 donations in the first five minutes, each of which triggers a number of tasks, from credit card billing to notification emails to database writes. Instead of processing these events synchronously and making the user wait, Omaze moved the tasks to IronWorker, where each can be triggered independently and run concurrently to handle the volume at scale, on demand.
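As a rough sketch of this pattern (the task names and the thread pool are illustrative stand-ins, not Omaze's code or Iron.io's client API), the request handler queues one task per side effect and returns immediately instead of doing the work inline:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-ins for the real side effects; in production each would be an
# IronWorker task queued over the API.
def bill_credit_card(donation):
    return f"billed {donation['amount']}"

def send_receipt_email(donation):
    return f"emailed {donation['email']}"

def record_donation(donation):
    return f"recorded {donation['id']}"

TASKS = [bill_credit_card, send_receipt_email, record_donation]
pool = ThreadPoolExecutor(max_workers=8)  # stand-in for the worker fleet

def handle_donation(donation):
    """Queue every task and return at once; the donor never waits."""
    return [pool.submit(task, donation) for task in TASKS]
```

Because each task runs independently, a burst of 10,000 donations simply becomes 30,000 queued tasks that the workers chew through concurrently.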

More and more developers are taking advantage of this asynchronous pattern within their applications for more effective scalability and better user experiences. Decoupled frontends that focus on the immediate user response loop communicate via APIs to microservices that each perform a single responsibility. Here at Iron.io, we estimate that around half of all workloads in a typical application are meant to be run asynchronously; however, orchestrating and processing these workloads within distributed systems is extremely complex. The Iron.io platform was built to solve this challenge in an industrial-strength manner that abstracts the complexities away from the developer, so that he or she can focus on writing the code that makes their software compelling to end users.

The Modern Application Stack

Cloud Bursting to Distribute the Work 

A big part of how Omaze is able to raise so much awareness and money is through highly targeted email campaigns to its previous donors. These efforts tend to perform 20-25% better than other channels in terms of conversions – a testament to the overall satisfaction of the community. Who wouldn’t want another opportunity to live out a dream experience?

To effectively handle these large volumes of emails, Omaze leverages IronWorker to segment each campaign into more manageable chunks that are then distributed over a number of workers. A master worker task goes through the user base and determines the right segment for the user based on a number of parameters. The user data is delivered to IronMQ, where slave workers pull from each segment queue to process and deliver the email.
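The master/slave split described above can be sketched as follows (segment names, the single segmentation rule, and the in-process queues are illustrative assumptions; in production the queues are IronMQ and the workers are IronWorker tasks):

```python
import queue

SEGMENTS = ("new_donors", "repeat_donors")

def segment_for(user):
    # The real master worker weighs a number of parameters; only one
    # (donation count) is sketched here.
    return "repeat_donors" if user["donations"] > 1 else "new_donors"

def master(users, queues):
    """The master task: route each user onto their segment's queue."""
    for user in users:
        queues[segment_for(user)].put(user)

def email_worker(q):
    """A slave worker: drain one segment queue and 'send' each email."""
    sent = []
    while not q.empty():
        sent.append(q.get()["email"])
    return sent
```

Because each segment lives on its own queue, any number of slave workers can pull from it in parallel, which is what lets the campaign emails go out in manageable chunks.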

Omaze's Email Campaign Process

Continuous Integration and Choreography

A key component to Omaze’s rapid success with their growing team has been their ability to streamline operations in collaboration with the Rackspace DevOps business unit. Modern distributed applications have more moving parts than a traditional monolithic application, requiring advanced configurations around infrastructure provisioning, automated testing, continuous integration, continuous deployment, and workload management. Rackspace helps companies like Omaze transform their operations to get faster innovation, accelerated time to market, improved deployment quality, better operational efficiency, and more time to focus on their core business goals.

Omaze entered the Rackspace DevOps program in August 2014 and immediately noticed improvements through its managed services and 24/7 tier-one triage support across the globe. Promotions from large global brands such as Disney and Coca-Cola lead to up to 8 billion sessions per day, and each piece of the complete architecture needs dedicated attention, from Chef scripts for server automation to New Relic for performance monitoring.

Not only does Rackspace package all of the tools needed to run and maintain a production-ready application at scale in a cohesive manner, they are on hand with the Fanatical Support Promise® that has made them a leader in their space.

Iron.io + Rackspace = Lean Transformation

“The less system administrative work we could take on the better, so it makes a ton of sense to let that be managed by people who are experts in tuning the queuing as well as the underlying operating parameters themselves for best performance.”

- David Lieberman, VP of Engineering, Omaze

About Omaze

Omaze is an innovative platform to raise money and awareness for causes by offering all donors and fans the opportunity to win once-in-a-lifetime experiences with the world’s biggest celebrities. They've launched over 250 life-changing experiences — everything from a walk-on role in Star Wars: Episode VII to riding in an RV with the cast of Breaking Bad to going on a date with George Clooney in NYC. They’ve raised millions for worthy causes and generated significant awareness by regularly appearing in outlets like the Today Show, Vanity Fair, CNN, Good Morning America, Jimmy Kimmel, and many others.

Omaze is on a mission to reinvent charitable giving by creating a cause marketplace. Leveraging storytelling, social media marketing, celebrity influence, and data science, they help charities raise more funds and awareness and create greater impact than they ever have before. | @omaze

About Rackspace DevOps

Rackspace is the #1 managed cloud company. Its technical expertise and Fanatical Support® allow companies to tap the power of the cloud without the pain of hiring experts in dozens of complex technologies. Based in San Antonio, Rackspace serves more than 300,000 business customers from data centers on four continents.

With the Rackspace DevOps Automation solution, Rackspace automates entire application environments by treating infrastructure as code. Through configuration management tools such as Chef and Windows PowerShell® Desired State Configuration (DSC), Rackspace DevOps Engineers automate the deployment and scaling of applications and manages and supports their environments 24x7x365. DevOps Engineers continuously monitor application performance utilizing tools such as New Relic, Logstash, and Kibana to identify and respond to performance anomalies before they cause service disruptions.

Getting Started with Iron.io

To give Iron.io a try, sign up for a free account today.

As a reward for signing up, we’ll even extend to you a 30-day trial of advanced features so that you can see how moving to the cloud will change the way you think about application development.

Friday, May 15, 2015

Upcoming Webinar on Developing Microservices for IoT Applications

Iron.io will be hosting a webinar on May 21, 2015 on the topic of microservices for IoT applications. The modern IT stack for the Internet of Things is just starting to form. This webinar discussion will address how data inputs and workload processing fit within this stack, and how developers can use message queues and async processing capabilities to develop flexible and scalable solutions.

Harnessing Microservices for Agility and Scale in IoT applications

When: Thursday May 21st at 10:00 AM PST

Fast-moving, agile organizations such as Netflix, Gilt, and Untappd are embracing microservices as the new foundation for software development – a direct response to more costly and cumbersome monolithic approaches of the past. A composable services architecture breaks application development into discrete, logical tasks that are better suited for handling event-driven workloads within distributed cloud environments.

The microservices approach yields:

  • Faster feature development within more flexible computing environments
  • Improved scalability through matching event loads to resource utilization
  • Simpler migration from legacy architectures into modern distributed cloud applications

Please register here to attend.

    About the Presenter
Chad Arimura is the CEO and Co-Founder of Iron.io. Chad is an expert developer and cloud architect with over 10 years of experience leading technology teams in high-growth startups. He combines his development and engineering expertise with his product marketing and sales experience to drive the team to build the world's best cloud infrastructure services.

    Monday, May 11, 2015

    Full Docker Support for IronWorker

A couple of months ago, we announced a new workflow for IronWorker based on Docker that enabled you to test your worker code locally in the exact same environment as when it runs on the IronWorker cloud. The initial feedback we received was pretty consistent: “This is great, but can you make it more flexible when it comes to using images?”

Up until now, you had to use one of our predefined images (or stacks, as we used to call them). Although these images cover most major languages and OS packages you might want, they didn’t match the needs of all users. At Iron.io, we know empowering the developer is key. That includes enabling developers to make choices about how they use Docker.

Today, we’re announcing full Docker support, allowing you to use any Docker image, including your own custom images. This feature is in beta and is available only on dedicated accounts to begin with; please contact us if you’d like to try it out.

    How to Use a Custom Docker Image

Now on to the fun stuff. You can use any image that is available on Docker Hub, and it’s almost no different from the current DockerWorker workflow – you just need to tell us which image to use. To read more about the workflow and to run your first IronWorker, see here. The following explains the changes required to use your own custom image. Most of the changes apply to the upload step, when you upload your worker code.

The current way (which still works just fine) is to upload your code package and pick one of our predefined stacks with the --stack option.

The new way changes the format a bit to be more like docker run: instead of a stack name, you pass a full Docker image name. You can still use our existing stacks by using their full Docker image names, which is equivalent to using --stack.

It’s also possible to include your code inside the Docker image as a self-contained runnable image and not upload it separately. Notice that this form takes the Docker image name only; there is no code package uploaded. See this example repository for a full example with code and a Dockerfile.
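As an illustrative sketch only (the flag and image names below are assumptions, not the exact CLI syntax), the two upload forms look something like:

```
# old way: pick one of the predefined stacks
iron worker upload --stack ruby --name my_worker my_worker.rb

# new way: name any Docker Hub image, docker-run style
iron worker upload --name my_worker iron/images:ruby-2.1 my_worker.rb
```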

    New Environment Variables

Task information that was previously passed in as program arguments is now available as environment variables. The following environment variables are now set inside the container when it’s run:


So to load the payload for your task in a custom image, look up the PAYLOAD_FILE environment variable and read in the file. The old way, where this information was passed in as parameters, is deprecated.
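For instance, a task written in Python (any language works the same way; the payload is assumed here to be JSON, as IronWorker payloads typically are) could load its payload like this:

```python
import json
import os

def load_payload():
    """Read this task's payload from the file named by PAYLOAD_FILE."""
    path = os.environ["PAYLOAD_FILE"]  # set by IronWorker in the container
    with open(path) as f:
        return json.load(f)
```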

    Versioning and Updating Your Images

In order for IronWorker to know about an update to your image, you should always version your images using Docker tags: build the image with a new tag, push it to Docker Hub, and then upload it to IronWorker with that tag.


Full Docker support allows us to meet the requests for custom images and flexibility in image use. It also removes Iron.io from being a gatekeeper for runtime images. With this new feature, new users can have a seamless experience, while users with more demanding needs have the flexibility they need.

    Friday, May 1, 2015

    DockerWorker Unplugged

Today, the world revolves around developers. Digital businesses are becoming a significant part of the landscape, and a business now thrives on its responsiveness to customers and on how it handles its data. People used to talk about the Era of Information Technology; today we’re in the Era of the Developer.

Fast-moving businesses recognize the need to give developers the tools, platforms, and application services they require to get things done. Equally important is getting obstructions out of the way of developers and allowing them to move fast. What do developers need to be successful in this modern world? They need self-service, on-demand capabilities, immediate scale, and little to no operations. Simply put, developers want to write code – and do so in a manner that lets them focus on writing code without having to manage tools and infrastructure. The overhead of managing infrastructure, or of dealing with a mismatch between development and production systems, steals precious cycles from a developer’s main driver – writing code.

    Asynchronous Developer Workflows

Today, more than 50% of the processing workloads in a modern application take place in the background, running asynchronously outside of the main response loop. This means that in addition to an application layer (powered via a platform like Heroku, OpenStack, or Cloud Foundry, or running on VMs directly), developers also need background or asynchronous processing frameworks to address these event-driven computing workloads. They need message queuing, task processing, and job scheduling, along with the tooling to connect things together – making it easier to upload tasks, run them at will, or have them triggered by specific events. Developers also need to know that these tasks run in a predictable, secure, and scalable manner. Asynchronous, event-driven computing is where Iron.io is making strong inroads with developers and driving value for a growing number of fast-moving companies.

    Enter DockerWorker

What makes Iron.io’s story even more compelling is the way we leverage and support Docker containers. Our mission is to support simple and fast development workflows, not to disrupt the developer process. Each task running within Iron.io’s IronWorker runs within its own Docker container. We introduced this capability over a year ago and as a result have run over 500 million Docker containers. Read more about our experiences with Docker in the links below.

Just a few weeks ago, we introduced the capability to upload tasks into IronWorker as Docker containers. As a result, developers are now able to build and test locally and then deploy an exact replica of their code package in the cloud when it’s ready for production. This improved process removes latencies associated with library loading, service dependencies, and upload times. Once the code resides on the platform, developers enjoy all the benefits of being able to queue tasks and achieve scalable workflows with almost zero DevOps. The result is reduced development (build and test) times, shorter deployment cycles (push to platform), and better scale and availability (production).

In addition, other services can leverage this container-based model as they are introduced into the workflow. Developers can continue to develop, test, run continuous integration, and deploy to production in the workflow they love and that works best for their needs.

In this video, Chad Arimura, CEO of Iron.io, explains this process workflow and shares a brief example of how Iron.io’s DockerWorker model works (via CLI or API).

    For more information, check out the following resources:


    How to Get Started Today

To give Iron.io a try, sign up for a free IronWorker or IronMQ account today.

    As a reward for signing up, we’ll even extend to you a 30-day trial of advanced features so that you can see how moving to the cloud will change the way you think about application development.

    Tuesday, April 21, 2015

Hubble Gets Lean With Microservices and Iron.io

As the microservices pattern continues to spread through the industry as a dominant approach to building modern cloud applications, marquee examples from large-scale companies such as Netflix and Twitter may appear daunting to companies still on a growth path. When powering through agile cycles to release new features at a rapid pace, the last thing on your mind is maintainability. Well, maybe not the last thing, but it is a lesser concern.

    It’s rare to have the foresight to recognize future bottlenecks early on, so when we came across a series of blog posts by Tom Watson, the CTO and Co-founder of Hubble who did just that, we took notice and had a quick chat to discuss his experiences. As it turns out, taking a moment to reflect on how things scale as they grow actually put them in a position to release features quicker and more effectively by injecting a lean methodology of focused microservices development patterns and operations.

    Hubble is an online marketplace for London office space that began in January of 2014. Coming out of Entrepreneur First, a European accelerator program that brings like-minded people together, the founders Tom and Tushar Agarwal met to form a company to solve the many challenges of searching and finding the right space to work. (If London is anything like San Francisco, then we can most certainly sympathize!) The premise of Hubble is to connect hosts who have spare space with tenants who need it. These hosts range from people who have a spare desk to startups who may have five desks available in their office. Once the connection is made, the platform facilitates an open dialog between the hosts and the tenants to serve each other's needs.

    Ditching the Monolith

As a small team going through the rigors of an accelerator program, priority #1 was to get up and running as quickly as possible, so they picked Django as the framework for the MVP. Full-stack frameworks such as Django and Ruby on Rails are a great way to quickly prototype and build core functionality, but they can quickly become bloated with dependencies. That makes onboarding new developers a challenge, since all the right packages must be in place across the whole development lifecycle, and it slows down deployment, since the application must be tested and built as a single entity. Speed is important to a startup, so after gaining some traction early on, Tom recognized the bottlenecks in their monolithic application and looked for a better architecture pattern that would give them greater development speed and effective scalability as they grew.

    Getting Distributed and Going Micro

Moving from the monolithic application pattern to microservices seems like a monumental undertaking on the surface, as it’s a completely different approach to structuring an application; however, it doesn’t have to be an all-in switch. One of the key benefits of the pattern is the ability to tackle it piece by piece without losing the work that has already been done. This is how Hubble approached the process, after reading up on the subject and talking with other startups that were already further along in their own lightweight service-based approaches.

Through analyzing the core feature set, they were able to identify candidates that each fit the single responsibility principle. This is in line with the microservices pattern of separating components based on their business objective. The first obvious feature to split out was billing: all the direct processing and payment-info objects. The next was messaging: how messages get sent between users and link up as threads. After going through the process a few times, it became second nature, and piece by piece the application became less monolithic and more streamlined via microservices.

    “Over time we plan to keep doing that sensibly so that we’re spending more time building features and less time worrying about infrastructure.”

    - Tom Watson, CTO, Hubble

    The API Gateway

    When moving towards a microservices architecture, one consideration is ensuring requests are delivered to the proper service. A common approach, which Hubble adopted using Node.js and Express, is to have an API Gateway that handles routing and authentication. This lightweight layer accepts requests from the clients and routes to the individual microservice. Each service is considered to be in a trusted network, and is accessed through a private token, with authorization handled at the component level to avoid any duplication.
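A minimal sketch of that routing-and-auth layer follows (Hubble's gateway is Node.js/Express; Python is used here purely for illustration, and the route table, service URLs, and token are made up):

```python
# Map path prefixes to internal microservice endpoints (hypothetical).
SERVICES = {
    "/billing": "http://billing.internal",
    "/messages": "http://messaging.internal",
}
PRIVATE_TOKEN = "s3cret"  # shared only within the trusted network

def route(path, token):
    """Authenticate a request, then return (service_url, forwarded_path)."""
    if token != PRIVATE_TOKEN:
        raise PermissionError("invalid service token")
    for prefix, url in SERVICES.items():
        if path.startswith(prefix):
            # Strip the prefix so each service sees only its own paths.
            return url, path[len(prefix):] or "/"
    raise LookupError(f"no service for {path}")
```

Keeping authentication at this edge, with services addressed by a private token, is what lets each microservice stay small and avoid duplicating auth logic.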

    Queue all the Things

    As Hubble split out more and more components, one thing became very clear – they needed a message queue to communicate between services in a reliable manner as opposed to direct execution. After first looking into RabbitMQ and Redis, they found IronMQ, which better served their needs.

    “I wanted something that was hosted and easy to use, because I was trying to stay as lean as possible. I didn’t really want to have that overhead of DevOps. With IronMQ, not only did it do what I wanted, it also took a lot of the hassle away,” said Watson. “The message queue is such a critical piece of an architecture, but it's one of those that you just don't want to maintain.”

    Asynchronous Processing

    Once Hubble spent some time working with IronMQ, they came to a realization that much of the work they had split into microservices was better suited to run asynchronously. Each service is stateless with only the required dependencies for the task, making IronWorker a logical extension as it not only provided a streamlined environment for developing and deploying the individual microservice functionality, but it also provided for more effective scalability. If the community picks up and more people interact through messaging, those workers can scale up and down on-demand without affecting the rest of the application.

    “Because you’re dealing with stateless microservices,” said Watson, “one could even foresee a time where you just did all of your logic in IronWorker.”

    The Microservices Future

As an early adopter of modern cloud patterns and technologies such as microservices and Iron.io, Hubble has formed a lean organization that can deploy new features in a fraction of the time and at a fraction of the cost they would have incurred had they kept down the monolithic path. A new architecture comes with a new set of considerations, of course, but the benefits are clear. “There’s still a lot of figuring out to do with the future of application development, but what’s been cool about it is how the community’s evolved and how people have really figured out some interesting ways to solve complex problems,” said Watson. “Things like Iron.io that weren’t around when Netflix started out are going to make people think about microservices in a completely different way.”

    About Hubble

Tom Watson is CTO and Co-Founder at Hubble. He studied Computer Science at university and, after a short stint at IBM, realized that building a startup was what he really wanted to do straight after graduating. Since then he has co-founded Hubble and sought to make the tech behind it as cutting-edge as possible. You can read his original blogs on microservices here.

    Hubble helps startups and small companies find their perfect home. They are an online marketplace for office space, matching those looking to rent space with co-working spaces, serviced offices and people who just have a few spare desks. Currently the platform is only available in London (UK) but they hope to expand that in the coming months.

    How to Get Started Today

To give Iron.io a try, sign up for a free IronWorker or IronMQ account today.

    As a reward for signing up, we’ll even extend to you a 30-day trial of advanced features so that you can see how moving to the cloud will change the way you think about application development.

    Friday, April 17, 2015

Creating Microservices in Laravel (repost)

We came across a great tech post the other day by developer and writer Alfred Nutile. His post describes a simple process for doing background processing and creating microservices within Laravel, a fantastic PHP framework for modern web developers.

    Background Processing and Microservices

GitHub estimated that over 40% of workloads are processed in the background. At Iron.io, we have a number of customer stories that back this up, including Untappd. In a detailed case study, we show how they greatly reduced their user response times by moving 10 different events to the background and processing them using IronWorker.

    Creating microservices is an extension of this, essentially formalizing the concept of a worker into a task-specific API-driven service that is highly available and can be run on-demand. The benefits of moving from a monolithic application to a more distributed one are many. They include faster response times (by moving certain events to the background), more effective scaling, a more robust application, and much faster feature development.
    In computing, microservices is a software architecture design pattern, in which complex applications are composed of small, independent processes communicating with each other using language-agnostic APIs. These services are small, highly decoupled, and focus on doing a small task.
    – Wikipedia

    Wednesday, April 8, 2015

    How HotelTonight Streamlined their ETL Process Using IronWorker

    HotelTonight has reinvented the task of finding and booking discounted hotel rooms at travel destinations. Designed for last-minute travel planners and optimized for the mobile era, HotelTonight connects adventure-seeking, impulse travelers with just-in-time available hotel rooms wherever they land. 

This model has the market-enhancing effect of reducing excess inventory of unused hotel rooms, while delivering a seamless user experience and deep discounts for budget travelers who enjoy impulse travel and adventure. What most travelers may not realize is that behind the scenes at HotelTonight lies a massive business intelligence system built on a sophisticated cloud-based ETL platform that collects, converts, and stores data from multiple external services.

    Extract, Transform, Load (ETL) has been around in IT circles for a long time, dating back even to tape storage and the mainframe era, but the difference here is the use of cloud-based services along with a loosely-coupled and flexible approach to move data between systems in near real-time. The benefits include far less overhead and much faster workload processing, while translating into more timely and accessible information with which to make decisions.

    Cloud-based ETL - Scalable and Event Driven

    This HotelTonight ETL pipeline gathers external data from a host of sources and brings it together into Amazon Redshift, a managed, petabyte-scale data warehouse solution provided by Amazon Web Services. Amazon Redshift acts as the “Unified Datastore” and makes use of the SQL query language to connect a variety of platforms using a Postgres Adapter. Custom Ruby scripts power the HotelTonight ETL process, connecting the Business Intelligence team there to the SQL Workbench which front-ends the Amazon Redshift clusters. The dashboard lets anyone in the organization query the data and extract information for use in their initiatives.

    The net result of this complex operation is a fully aggregated dataset that is more accurate, more up-to-date, and more reliable. Turning raw data into reliable, up-to-date information enables HotelTonight analysts to make faster decisions and faster updates on available hotel room information for their users.
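The payoff of a unified datastore is that one SQL query can combine rows that originated in different external services. The sketch below uses sqlite3 purely as a stand-in for Redshift's SQL interface (reached in production through a Postgres adapter); the table and column names are made up:

```python
import sqlite3

# In-memory stand-in for the Redshift "unified datastore".
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE bookings (city TEXT, source TEXT, amount REAL)")
db.executemany(
    "INSERT INTO bookings VALUES (?, ?, ?)",
    # Rows that, in production, arrive via separate ETL workers.
    [("SF", "ads", 120.0), ("SF", "email", 80.0), ("NYC", "ads", 95.0)],
)

# One query correlates data from every source at once.
rows = db.execute(
    "SELECT city, SUM(amount) FROM bookings GROUP BY city ORDER BY city"
).fetchall()
```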

The key cog powering this cloud-based ETL process – and what allows it to be scalable and completely event-driven – is IronWorker, an asynchronous task-processing service provided by Iron.io. HotelTonight uses IronWorker as the “go-to platform for scheduling and running our Ruby-based ETL worker pipeline,” says Harlow Ward, former lead developer at HotelTonight.

“The team at Iron.io has been a great partner for us while building the ETL pipeline,” says Ward. “Their worker platform gives us a quick and easy mechanism for deploying and managing all our Ruby workers.”

    Harlow further describes how IronWorker ensures HotelTonight’s ETL process is repeatable, scalable and protected in the case of failures. “Keeping [worker] components modular allows us to separate the concerns of each worker and create a repeatable process for each of our ETL integrations," says Ward.

    A Distributed ETL Workflow

HotelTonight uses a custom worker for each external data source (see Figure 1 for details of HotelTonight's data sources). This means the aggregation of each data source proceeds independently of the others.

    Figure 1: HotelTonight Data Sources and Workflow

“IronWorker’s modularity allows for persistent points along the lifetime of the pipeline. It also allows [HotelTonight] to isolate failures and more easily recover should data integrity issues arise,” according to Ward. “Each worker in the pipeline is responsible for its own unit of work and has the ability to kick off the next task in the pipeline.”
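That chaining of modular workers can be sketched as follows (HotelTonight's workers are Ruby and run on IronWorker; the Python below, the fake data, and the stage names are purely illustrative):

```python
from collections import deque

def extract(ctx):
    # Pull raw rows from an (here, faked) external data source.
    ctx["raw"] = ["  Alice ", "BOB"]
    return "transform"  # name of the next worker to kick off

def transform(ctx):
    ctx["clean"] = [r.strip().lower() for r in ctx["raw"]]
    return "load"

def load(ctx):
    ctx["warehouse"].extend(ctx["clean"])  # stand-in for a Redshift load
    return None  # end of the pipeline

WORKERS = {"extract": extract, "transform": transform, "load": load}

def run_pipeline(start, ctx):
    """Run workers in sequence; each one queues the next when it finishes."""
    tasks = deque([start])
    while tasks:
        nxt = WORKERS[tasks.popleft()](ctx)
        if nxt:
            tasks.append(nxt)  # the worker kicks off the next task
    return ctx
```

Because each stage is its own unit of work, a failure in `transform` leaves the extracted data intact, giving the persistent recovery points Ward describes.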

For a detailed discussion of Harlow’s ETL process at work, check out Harlow’s blog.

This distributed pattern also improves agility, in that changes can be made quickly within one worker/data-source pull without needing to redeploy a full application or push changes beyond that particular workflow. New data sources can also be brought online just by writing simple scripts in whatever language the developers want to use (Ruby, in the case of HotelTonight).

    Workflow Monitoring and Orchestration

In addition to solving the challenge of quick and easy deployment of independent workers, the Iron.io dashboard (HUD) provides current status and reporting information to HotelTonight developers, giving them instant visibility into the state of their ETL pipeline. Users can control settings for the workflow, including increasing or decreasing concurrency, retrying tasks that failed in prior attempts, and changing job schedules. “The administration area boasts excellent dashboards for reporting worker status and gives us great visibility over the current state of our pipeline,” says Ward.

    Figure 2: HUD dashboard of current worker status

    Leveraging Unified Data for Faster Decision Making

Now that HotelTonight’s business intelligence data is consolidated in Amazon Redshift, HotelTonight can run SQL queries to combine and correlate data from multiple platforms into a unified dataset. Prior to this solution, HotelTonight’s “data analytics” consisted of multiple CSVs exported from each data source, merged into a single pivot table, with lots of “magic” applied to make sense of it all.

    IronWorker makes it possible for HotelTonight to streamline and automate their entire ETL process and bring together all of their disparate data sources in a flexible datastore. HotelTonight can rest easy with the assurance that, in using IronWorker, their data pipeline into Amazon Redshift is in excellent order. 

    At Iron.io, we’re big users of HotelTonight and can’t wait to book our next business road show using their service. We wouldn’t think of doing it any other way.


    How to Get Started Today

    To give Iron.io a try, sign up for a free IronWorker or IronMQ account today at Iron.io.

    As a reward for signing up, we’ll even extend to you a 30-day trial of advanced features so that you can see how moving to the cloud will change the way you think about application development.

    Tuesday, April 7, 2015

    Iron.io CEO Chad Arimura Speaking at IoT Stream Conference in April

    Chad Arimura, CEO and Co-Founder of Iron.io, will be speaking at the IoT Stream Conference in April. We're also sponsors of the event.

    This conference will bring together architects and builders to discuss best practices and the emerging IoT technology stack. If you want to collaborate with hands-on people solving real IoT challenges, IoT Stream Conference is the place to be.

    Here's a description of Chad's talk:
    Thursday, 11:00am (Apr 23)
    Harnessing Microservices and Composable Services for Agility and Scale in IoT Applications
    Chad Arimura
    Fast-moving, agile organizations such as Netflix, Gilt and Untappd are embracing microservices as the new foundation for software development – a direct response to monolithic approaches of the past. A composable services architecture breaks application development into discrete, logical tasks that are better suited for handling event-driven workloads within distributed cloud environments. 
    This session will review the best practices for developers who must address the challenges of deploying and managing service-driven architectures for IoT and stream-oriented workloads. 

    The conference is hosted and organized by PubNub and takes place on Thursday, April 23 at the Bentley Reserve in San Francisco, CA. Other speakers and sponsors come from companies including GE, Cisco, Intel, Microsoft, AT&T, SoftLayer, and Ericsson. If you're at the conference, be sure to come up and say hello.

    Wednesday, April 1, 2015

    Roam Directories Goes Serverless with Iron.io and Other Cloud Services

    Users expect immediate access to information, and this expectation is no different in the commercial real estate industry. Fast-moving companies need innovative web tools that enable property managers to upload, update and exchange building information with prospective tenants.

    Roam Directories – Creating a New Era of
    Commercial Real Estate Directories

    Roam Directories, founded in 2013, is a relatively new company in an industry filled with established firms. They created a commercial real estate directory that provides unique and engaging experiences for prospective tenants, while empowering property managers to deliver a rich set of materials that provide an enhanced view of a property. 

    To make this possible, Roam Directories built the Atlas directory, an interactive, digital touchscreen display that shows building tenants, visitors, and prospective tenants up-to-date photos, videos, architectural drawings, and other materials about the building they are visiting. The Atlas interface design and workflow that Roam Directories created for property managers is a big part of their success. Also key is the way they address process automation and IT infrastructure management to keep information up to date. The combination gives them fast innovation and reduced costs, which lets Roam Directories offer the Atlas service at a highly competitive price.

    From Application-Driven to Event-Driven Processing

    In addition to delivering innovative design and interaction, a key goal for Roam Directories was to migrate their infrastructure to a “serverless environment” by employing cloud services. They wanted to reduce operational overhead, cut out non-essential capital acquisition, and eliminate worries about VMs, load balancers, and other application and data center concerns. 

    In making this transition, Roam Directories leveraged a number of cloud-based services that execute key tasks, such as data processing, image handling, user registration, authentication, email distribution, and social media streams. Their processing moved from application-driven to event-driven. Instead of large monolithic applications running constantly in the background, they moved to microservices (i.e. task-specific services running in the cloud that are triggered based on events, automated schedules, and other asynchronous application activities).

    Dennis Smolek, CTO and Founder, Roam Directories
    “Our biggest goal is to move our entire application to be 100% serverless. Naturally there are challenges related to things like user authentication, priorities, and processing. Our application does not do a ton of data handling on its own as we’ve done a good job leveraging third party services...We leverage other services to handle the tasks that a server/cluster normally would,” says Dennis Smolek, CTO and Founder of Roam Directories.

    Roam Directories was in a fortunate position of being able to carefully select among a growing catalog of technologies to accelerate their transition to the cloud. This freedom meant choosing not only the best products but also selecting ones that didn’t create vendor lock-in or require specific platforms, languages, patterns, or process flows.

    This diagram illustrates the task automation process at Roam Directories.

    Enabling Lean and Agile Development Processes

    A big part of Roam Directories’ migration to a serverless infrastructure was leveraging the Iron.io platform as their main event-driven workload processor. This change allowed them to improve process efficiency and reduce costs in keeping with their lean and agile philosophy.

    Now, email notifications, user registration, content filtering, and monitoring services are all pushed to the cloud and managed by workers running within IronWorker, an asynchronous task-processing service provided by Iron.io. IronWorker delivers the muscle behind the scenes by efficiently orchestrating the individual tasks that are processed on demand as part of the Atlas service.

    By leveraging the IronWorker service, Roam Directories is able to offload key tasks such as mass email events to the background, and thus free up valuable resources and save time as well as scale out the workload. Instead of using serial processes that could take hours, the company takes advantage of on-demand scale to distribute the work and shrink the duration. 

    Many of the events and workloads require Roam Directories to push outbound services and data to the Atlas touchscreen displays. Another equally important set of activities relates to data input. “Without a server to poll or query other data sources or opening up our datastore to less secure third-party services, we were left with a big question on how [to get data into our system] would work. We’ve leveraged workers and scheduled tasks within the IronWorker service to connect to all sorts of APIs and feeds and then decide what other actions to take,” according to Smolek.

    This switch not only eliminates having resources sit idle, but also lets them respond quickly to new data sources and inputs. To bring data in, they simply write some task-specific code, create a schedule, upload it to IronWorker, and run it.
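    A recurring data pull like this can be sketched with the iron_worker_ng gem’s schedules API. (The worker name, payload, and interval below are hypothetical, not Roam Directories’ actual setup.)

    ```ruby
    # Sketch: schedule a data-pull worker to run on a recurring interval.
    # The guarded gem call lets the file run without Iron.io credentials.

    # Build the schedule options; iron_worker_ng takes the interval in seconds.
    def schedule_options(minutes)
      { :run_every => minutes * 60 }
    end

    if ENV['IRON_TOKEN'] # only when Iron.io credentials are configured
      require 'iron_worker_ng'
      client = IronWorkerNG::Client.new
      # Pull the (hypothetical) listings feed every 30 minutes.
      client.schedules.create('feed_pull_worker', { 'feed' => 'listings' },
                              schedule_options(30))
    end
    ```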

    This diagram illustrates a number of these scheduled tasks and how IronMQ and IronWorker play key roles in the processes.

    Another benefit realized by Roam Directories, using this event-driven architecture, involves social media streams. A favorite example is what they’re doing with Twitter.
    Twitter’s streaming API allows users to ‘listen’ to feeds and sources like hashtags or even just words in a string. We were originally going to have a server up and running 24/7 whose only job was to listen to Twitter.  
    It seemed very wasteful and expensive. Now with workers, we pull our listeners (users and hashtags) from Google’s Firebase service and initiate a stream to Twitter. Every 30 minutes, the worker restarts itself. As each tweet comes in, it automatically gets queued and then fires up another worker that processes the tweet, decides if we are tracking it, and sends it to WebPurify (a profanity filter and image moderation service) to make sure it’s clean. It then pushes the tweet into our Firebase account. 
    We are working to improve this a bit but it has made us go from polling and delayed processing to near real-time twitter tracking with the security that the content that shows on our screens will be moderated and filtered. All of this at scale, hundreds of tweets automatically queued up for processing with concurrent workers running and making it super fast. 
    – Dennis Smolek, CTO and Founder, Roam Directories.
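    The fan-out step Smolek describes can be sketched like this. (The queue name and message fields are hypothetical, and the queue call assumes the iron_mq gem; WebPurify and Firebase handling are omitted.)

    ```ruby
    require 'json'

    # Sketch: the streaming worker wraps each incoming tweet as a message
    # and queues it, so a separate processing worker can pick it up,
    # run moderation, and push it onward.

    # Serialize just the fields the processing worker needs.
    def tweet_message(tweet)
      { 'id' => tweet['id'], 'text' => tweet['text'] }.to_json
    end

    if ENV['IRON_TOKEN'] # queue only when Iron.io credentials are configured
      require 'iron_mq'
      queue = IronMQ::Client.new.queue('tweets')
      queue.post(tweet_message('id' => 1, 'text' => 'hello world'))
    end
    ```

    Queuing each tweet individually is what lets the processing side scale out: many concurrent workers can drain the queue at once instead of one listener doing everything serially.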

    The Move to Event-Driven Processing

    At the beginning of the project, Roam Directories considered a few alternatives to Iron.io. When asked, Dennis explained how he arrived at his decision to use IronWorker. “I started with beanstalkd and Gearman but that meant dedicated boxes/services for workers, so I looked at SQS but that didn’t actually handle processing the message which IronWorkers do so well,” said Smolek.

    These other task processing solutions may require significant effort to connect the components and orchestrate the workflows. Ops teams also must regularly manage the components and servers that perform the processing. The IronWorker platform provides the orchestration, management, and processing including retries, priority queues, monitoring, reporting, and more.

    Automation is key for small startups and teams. Tools like Zapier are great for connecting one app to another, but with a full application you need more flexibility and management...With Iron.io, we have much higher levels of control and monitoring.
    Going serverless is an insane money saver. For many front-end/support applications, a large portion of server time is spent idling. And no matter how well you design your systems to scale, you will have a ton of CPU/Storage/Instances doing nothing but costing you money. We are on a developer plan with Iron.io and we expect to save at least $2,000/mo. We are a very early stage company, so that kind of savings is huge.
    – Dennis Smolek


    We’re pleased that the folks at Roam Directories are such strong fans of IronWorker. And we’re always glad to hear stories that reinforce use cases where Iron.io can help growing companies like Roam Directories move quickly, scale with little effort, and realize big cost savings along the way.


    About Dennis Smolek

    Dennis Smolek is CTO and founder of Roam Directories. He has worked in the interactive space for the past 10 years, starting his own creative agency and developing high-end interactive solutions.

    About Roam Directories

    Roam Directories' mission is to create a new era of directories that deliver a unique experience to office buildings. With a focus on functionality, design, and customization, Roam Directories do more than simply list information like companies and contacts. Incorporating familiar concepts from web and mobile design such as high-impact images, quality typography, and interactive layouts, Roam's touchscreen interfaces stand out from competitors.


    How to Get Started 

    To give Iron.io a try, sign up for a free IronWorker or IronMQ account today at Iron.io.

    As a reward for signing up, we’ll even provide you with a 30-day trial of advanced features so that you can see how moving to the cloud will change the way you think about application development.

    Friday, March 27, 2015

    Super Easy Serverless Slack Bots

    Slack has a great API for building integrations, and one of those integration types is called a "bot". Bots are useful tools that can respond to messages chat users type into a chatroom. For instance, you could type in "what is the weather" and the bot would respond with today's weather.

    Bots are simply software programs that run on a server somewhere. When someone types a special sequence of characters in Slack (these usually start with a '/'), the message is sent to the bot. The bot then responds with whatever answer it wants to give, and that answer is posted back to the chatroom.

    Way cool. Buuuut... you have to run the bots on a server somewhere that Slack can communicate with and it always has to be running whether it's being used or not. True PITA.

    IronWorker is an event-driven processing (EDP) system, and responding to commands/messages is what it does best. So why not respond to chat events? Whenever a keyword or slash command is typed into Slack, an IronWorker will execute to respond to the request. No servers required! No waste either, as the worker bot will only run when it's called and will stop as soon as it's finished responding. Perfect.

    Hello World Example

    Here I'll show you how to make the simplest Slack bot in the world. When you type /hello, it will post “Hello World!” to the room.

    1) Write our bot

    Here's the code for hellobot:
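    Here's a minimal sketch of what the worker can look like (this is an illustration, not the post's original snippet; it assumes Slack's form-encoded slash-command body arrives via IronWorker's -payload file argument, and reads the webhook URL from config.json, described below):

    ```ruby
    require 'json'
    require 'net/http'
    require 'uri'
    require 'cgi'

    # Parse Slack's form-encoded slash-command body into a hash.
    def parse_slack_payload(body)
      body.split('&').each_with_object({}) do |pair, h|
        k, v = pair.split('=', 2)
        h[CGI.unescape(k)] = CGI.unescape(v.to_s)
      end
    end

    # Build the JSON message Slack's incoming webhook expects.
    def hello_message(channel_name)
      { 'channel' => "##{channel_name}", 'text' => 'Hello World!' }.to_json
    end

    # IronWorker passes the task payload via the -payload file argument.
    if __FILE__ == $PROGRAM_NAME && ARGV.include?('-payload')
      config = JSON.parse(File.read('config.json'))
      body   = File.read(ARGV[ARGV.index('-payload') + 1])
      slack  = parse_slack_payload(body)

      # Post "Hello World!" back to the channel the command came from.
      Net::HTTP.post(URI(config['webhook_url']),
                     hello_message(slack['channel_name']),
                     'Content-Type' => 'application/json')
    end
    ```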

    The code above should be pretty straightforward: we get a message from Slack (the payload), then we send "Hello World!" back to Slack on the right channel. It's in Ruby, but it could be in any language.

    Now let's tie everything together and get it working.

    2) Get Incoming Webhook URL from Slack

    In Slack, go to Integrations, then Incoming Webhooks, then click Add. Choose a channel, then click Add again. Slack will provide you with a webhook URL. Create a file called config.json with the following content:
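    A placeholder version looks like this (the URL shown is a stand-in, not a real webhook):

    ```json
    {
      "webhook_url": "https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
    }
    ```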

    Replace the webhook_url string in config.json with the one that Slack provided.

    3) Test the Bot / Worker

    Since this is Ruby, we need a Gemfile to define our dependencies.
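    A minimal Gemfile for the bot might look like this (the exact gem list is an assumption):

    ```ruby
    source 'https://rubygems.org'

    gem 'json' # payload parsing and webhook body serialization
    ```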

    Install the gems to the current directory so we can run them in Docker and upload them to IronWorker.

    Here’s a sample of the POST body Slack will send to the bot.
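    Something like the following works (field names follow Slack's slash-command documentation; the values are placeholders, and the body arrives as a single form-encoded line):

    ```
    token=gIkuvaNzQIHg97ATvDxqgjtO&team_id=T0001&channel_id=C2147483705&channel_name=random&user_id=U2147483697&user_name=Steve&command=%2Fhello&text=
    ```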

    Copy and paste this into a file named slack.payload.

    Now run it to test it with the example payload.

    You should see Hello World! in #random now!

    Ok, we’re all good, let’s upload it to IronWorker.

    4) Upload to IronWorker

    Now it's time to upload it to IronWorker so Slack can send messages to it and the IronWorker platform will take care of the rest.
    Grab the URL it prints to the console and go to it in your browser; it will look something like this:

    On that page, you’ll see a Webhook URL; it will look something like this:

    Copy that URL, we'll use it in the next step.

    5) Create a Slash Command in Slack

    In Slack, go to Integrations, find Slash Commands, click Add, type in /hello as the command then click Add again. On the next page, take the IronWorker’s webhook URL you got in the step above and paste it into the URL field then click Save Integration.

    6) Try it out! Type /hello into a Slack channel

    Alright, now it's time to try the whole thing out. Go to a Slack channel and type /hello.

    You should see this:


    This bot isn't really that useful, but it's a good one to get you started and a good template to build more useful bots from. I've got a few more example bots I'll post in the weeks to come in the GitHub repo below and we'd love to hear about any IronBots that you make. If they're good, we'll share them too.

    You can find the full source for this example and a bunch of other bots here: