A Serverless Message Queue Without the Glue

More and more technologies get involved as systems grow, and it's sometimes hard to keep track of what's doing what. Caching layers, message queues, serverless functions, tracing frameworks… the list goes on. Once you start sprinkling in public cloud services, you may find yourself developing your way into vendor lock-in. All of a sudden, you're dealing with one cloud, tons of services, and a pile of glue code to make those services talk to each other. One of Iron's primary goals is to make life easier for developers, and IronMQ's little-known Push Queue feature can keep you from having to write that glue.

What are Push Queues?

IronMQ has a built-in feature called Push Queues which, when enabled, fires off an event any time a message is pushed onto the queue. This comes in extremely handy when you want to immediately "do something" (or many things) with that message. With a traditional message queue, you'd usually need to write another process that polls the queue for messages at a given interval. IronMQ's Push Queues can fire events at several types of endpoints, each helpful in its own way.
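For contrast, here is roughly the polling loop a Push Queue saves you from writing. This is a minimal sketch assuming the v2-era iron_mq_php client; the token, project ID, and queue name are placeholders, and handleMessage stands in for your own logic:

<?php
require_once "vendor/autoload.php"; // iron_mq client installed via Composer

$ironmq = new IronMQ(array(
    "token"      => "YOUR_TOKEN",
    "project_id" => "YOUR_PROJECT_ID"
));

// The always-on poller you'd otherwise have to run:
while (true) {
    $msg = $ironmq->getMessage("my_queue");  // reserve the next message, if any
    if ($msg) {
        handleMessage($msg->body);           // your "do something" logic
        $ironmq->deleteMessage("my_queue", $msg->id);
    } else {
        sleep(5);                            // nothing yet, wait before polling again
    }
}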

What types of events can be triggered?

HTTP
When a message is put onto your push queue, IronMQ can make a POST request (with the message in the request body) to any URL of your choice. This is handy when you want to notify other systems that an event just happened or to kick off another process.

MQ
Inception! You can have the delivery of a message populate another IronMQ queue. This is helpful if you want to tie multiple queues together or, for example, create a dead letter queue.

Worker
IronMQ can connect directly to IronWorker and pass its message as the payload to one of your jobs. How cool is that!? To show just how cool, we'll run through a real-life scenario.

MQ & Worker Example

Let's say you have a time-sensitive nightly job that processes uploaded CSV files. It needs to process all of the files uploaded during the day and finish within a set window. As your system grows and there are more CSV files to process, your nightly job starts running behind schedule.

You realize that a lot of the time in your nightly worker is spent normalizing each CSV file into the expected format. It would make sense to split the process into two distinct stages: formatting and processing. When a CSV file is received, you send a message to your push queue, which in turn kicks off a "formatting" worker job to pre-process the file. Your nightly "processing" worker job can then fly through the CSV files because it no longer needs to fix formatting issues.
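On the upload side, all your application has to do is post a single message. Another minimal sketch with the same assumed client; the queue name and payload are illustrative:

<?php
require_once "vendor/autoload.php"; // iron_mq client installed via Composer

$ironmq = new IronMQ(array(
    "token"      => "YOUR_TOKEN",
    "project_id" => "YOUR_PROJECT_ID"
));

// Because csv_uploads is a Push Queue, this one post immediately
// triggers the subscribed "formatting" worker. No polling anywhere.
$ironmq->postMessage("csv_uploads", json_encode(array(
    "file" => "uploads/report.csv"
)));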

The beauty here is that you can keep adding push events to the queue. When a file is uploaded, maybe you also need to ping another worker that handles OCR, or post an update to an external HTTP endpoint letting it know exactly which file was uploaded. Without a push queue, you'd be writing a lot of custom code to handle these requests, retries, errors, and so on. IronMQ's Push Queues take care of all of this for you.

How can I configure a Push Queue?

Retries
You can configure your queue with a custom number of retries, a custom delay between retries, and even another queue to store failed push attempts. For example, with an HTTP endpoint, IronMQ will retry a push (3 times by default) whenever it receives a non-200 response code.

Timeouts
If a push doesn't receive a response within a certain period of time (10 seconds by default), IronMQ chalks it up as a failed attempt and retries.

Unicast or Multicast?
You can even fire off multiple events from one queue. If you need to trigger one HTTP endpoint and also fire off a Worker job, that’s not a problem.

How do I create a Push Queue?

Creating one is straightforward.  Here’s an example cURL request that creates a multicast Push Queue with an HTTP endpoint as well as a Worker endpoint.
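Something like the following, assuming the v2-era HTTP API; the token, project ID, and subscriber URLs are placeholders, and the second subscriber is an IronWorker webhook URL (the usual way a queue hands its message to a worker job). Check the IronMQ documentation for the exact host and fields in your API version:

curl -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: OAuth YOUR_TOKEN" \
  -d '{
        "push_type": "multicast",
        "retries": 3,
        "retries_delay": 60,
        "error_queue": "csv_push_errors",
        "subscribers": [
          {"url": "https://example.com/uploads/notify"},
          {"url": "https://worker-aws-us-east-1.iron.io/2/projects/YOUR_PROJECT_ID/tasks/webhook?code_name=format_csv&oauth=YOUR_TOKEN"}
        ]
      }' \
  "https://mq-aws-us-east-1.iron.io/1/projects/YOUR_PROJECT_ID/queues/csv_uploads"

The retries, retries_delay, and error_queue fields cover the retry behavior described above.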

IronMQ has client libraries available in most languages, so you can easily create one programmatically as well.  Here’s an example in PHP:
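Again a minimal sketch assuming the v2-era iron_mq_php client; newer versions of the library use a namespaced IronMQ\IronMQ class and slightly different option names:

<?php
require_once "vendor/autoload.php"; // iron_mq client installed via Composer

$ironmq = new IronMQ(array(
    "token"      => "YOUR_TOKEN",
    "project_id" => "YOUR_PROJECT_ID"
));

// Make csv_uploads a multicast Push Queue with an HTTP subscriber
// and an IronWorker webhook subscriber (URLs are placeholders).
$ironmq->updateQueue("csv_uploads", array(
    "push_type"     => "multicast",
    "retries"       => 3,
    "retries_delay" => 60,
    "error_queue"   => "csv_push_errors",
    "subscribers"   => array(
        array("url" => "https://example.com/uploads/notify"),
        array("url" => "https://worker-aws-us-east-1.iron.io/2/projects/YOUR_PROJECT_ID/tasks/webhook?code_name=format_csv&oauth=YOUR_TOKEN")
    )
));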

Conclusion

With one IronMQ Push Queue, you can make a lot happen. If you tried to replicate a multicast Push Queue on a traditional message queue, you'd end up writing a lot of custom code to glue everything together, and you'd have to scale your own infrastructure as your message queue needs grew. With IronMQ, you can spend more time on your application's business logic and less on glue and infrastructure. For more detailed information about Push Queues, visit the IronMQ documentation.

If you’re interested in knowing more about IronMQ, or want to chat about how we may be able to help, call us anytime at 888-501-4766 or email at support@iron.io.



Top 10 Uses For A Message Queue


Geese love queues. (Image by D. Hilgart)

We've been working with, building, and evangelising message queues for the last year, and it's no secret that we think they're awesome. We believe message queues are a vital component of any architecture or application, and here are ten reasons why:
Continue reading “Top 10 Uses For A Message Queue”

Top 10 Uses of a Worker System

A worker system is an essential part of any production-scale cloud application. The ability to run tasks asynchronously in the background, process tasks concurrently at scale, or schedule jobs to run at regular intervals is crucial for handling the workloads and processing demands common in a distributed application.

At Iron.io, we're all about scaling workloads and performing work asynchronously, and we hear from our customers on a continuous basis. Almost every customer has a story about how they use the IronWorker platform to gain greater agility, eliminate complexity, or just get things done. We wanted to share a number of these examples so that other developers have answers to the simple questions "How do I use a worker system?" and "What can I do with a task queue?" Continue reading "Top 10 Uses of a Worker System"

IronFunctions Alpha 2

Today we are excited to announce the second alpha release of IronFunctions, the language-agnostic serverless microservices platform that you can run anywhere: on public, private, and hybrid clouds, even on your own laptop.

The initial release of IronFunctions received some amazing feedback and we’ve spent the past few months fixing many of the issues reported. Aside from fixes, the new release comes with a whole host of great new features, including:

  • Long(er)-running containers for better performance, aka Hot Functions
  • LRU cache
  • Triggers example for the OpenStack project Picasso
  • Initial load balancer
  • fn: support route header tweaks
  • fn: add Rust support
  • fn: add .NET Core support
  • fn: add Python support

Stay tuned for upcoming posts with insights into individual features such as the LRU cache, the load balancer, and the OpenStack integration.

What’s next?

We will be releasing a Beta with more fixes, improvements to the load balancer, and a much-anticipated new feature that will allow chaining of functions.

We're excited to hear people's feedback and ideas. It's important that we're building something that solves real-world problems, so please don't hesitate to file an issue or join us for a chat in our channel on our Slack team.

Thanks for all the love and support,
The Iron.io Team

Discuss on Hacker News
Join our Slack
File an Issue
Contact Iron.io about enterprise support

Announcing Project Picasso – OpenStack Functions as a Service

We are pleased to announce a new project to enable Functions as a Service (FaaS) on OpenStack — Picasso.

The mission is to provide an API for running FaaS on OpenStack, abstracting away the infrastructure layer while enabling simplicity, efficiency, and scalability for both developers and operators.

Picasso can be used to trigger functions from OpenStack services, such as Telemetry (via HTTP callback) or Swift notifications. This means no long-running applications, as functions are only executed when called.

Picasso comprises two main components:

  • Picasso API
    • The Picasso API server uses Keystone authentication and authorization through its middleware.
  • IronFunctions
    • Picasso leverages the backend container engine provided by IronFunctions, an open-source Serverless/FaaS platform based on Docker.

Resources

 

We’ve created some initial blueprints to show what the future roadmap looks like for the project.

You can try out Picasso now on DevStack by following the quick start guide here. Let us know what you think!

If you’re interested in contributing or just have any questions, please join us on the #OpenStack channel in Slack.

Announcing IronFunctions Open Source

 

Today we're excited to announce IronFunctions, our first major open source project.

IronFunctions is a serverless microservices platform that you can run anywhere: on public, private, and hybrid clouds, even on your own laptop. The world is moving toward hybrid/multi-cloud, and so should your serverless platform.

It runs on top of the popular orchestration frameworks (Kubernetes, Mesosphere), inside PaaS runtime environments (CloudFoundry, OpenShift), and on bare metal.

Functions are packaged using Docker, so IronFunctions supports any language and any dependencies and can run anywhere. It supports the Lambda function format today for easy portability, and it will eventually support other container technologies and function formats as well.

IronFunctions is written in Go, is extremely fast, and is built with scalability and operability in mind.

Finally, it's being driven by our team at Iron.io, which unashamedly takes credit for coining the term "serverless" back in 2011 and 2012. We've launched billions of containers through our flagship serverless job-processing service, IronWorker, and now bring that knowledge and experience to IronFunctions to round out our portfolio of products with synchronous capabilities.

So without further ado, we’d love your help in building an amazing platform and community. Fork the repo and please give us pull requests and create issues!

The Project: https://github.com/iron-io/functions

Join our Slack room: http://get.iron.io/open-slack

The Press Release: http://www.marketwired.com/press-release/ironio-releases-first-open-source-project-2175887.htm

Join the conversation: https://news.ycombinator.com/item?id=12961296

Thanks for supporting Iron.io for the past 5+ years.

Chad Arimura
CEO, Iron.io

The Overhead of Docker Run

First published on Medium on 10/11/2016.

We use Docker a lot. Like a lot, lot. While we love it for a lot of things, it still has room for improvement. One of those areas is the startup/teardown time of running a container.

The Test

To test the overhead of running a Docker container, I made a script that compares execution times for various docker run options versus not using Docker at all. The script being timed is a simple hello-world shell script, hello.sh, which consists of the following:

echo "Hello World!"

The base Docker image is the official Alpine Linux image plus the script above.

4 Things to Compare

  1. As a baseline, the first measurement is sans Docker. This is just running the hello.sh script directly.
  2. The second measure is just docker run IMAGE.
  3. The third measure adds the --rm flag to remove the container after execution.
  4. The final one is to use docker start instead of run, so we can see the effect of reusing an already created container (the exact commands are sketched below).
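Concretely, the four cases map to commands roughly like these. The docker create step for the reused container isn't shown in the original post, so treat that line as an assumption:

./hello.sh                                   # 1. baseline, no Docker
docker run treeder/hello:sh                  # 2. new container on every run
docker run --rm treeder/hello:sh             # 3. new container, removed after exit
docker create --name reuse treeder/hello:sh  # one-time setup for case 4 (assumed)
docker start -a reuse                        # 4. restart the same container, attached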

Docker for Mac

Server Version: 1.12.2-rc1

Running: ./hello.sh
avg: 5.897752ms
Running: docker run treeder/hello:sh
avg: 988.098391ms
Running: docker run --rm treeder/hello:sh
avg: 999.637832ms
Running: docker start -a reuse
avg: 986.875089ms

(Note: looks like using Ubuntu as a base image is slightly faster than Alpine, in the 10–50ms range).

Docker on Ubuntu

Server Version: 1.12.1

Running: ./hello.sh
avg: 2.139666ms
Running: docker run treeder/hello:sh
avg: 391.171656ms
Running: docker run --rm treeder/hello:sh
avg: 396.385453ms
Running: docker start -a reuse
each: 340.793602ms

Results

As you can see from the results above, using Docker adds nearly a full second to the execution time of our script on Mac and ~390ms on Linux (~175x slower than running the script without Docker).

Now this may not be much of an issue if your script/application runs for a long period of time, but it is certainly an issue if you run short-lived programs.

Try it yourself

Feel free to try running the script on your system and share the results! You can find everything you need here: https://github.com/treeder/dockers/tree/master/hello

Just clone that repo, cd into the hello directory, and run:

go run time.go
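If you're curious what the measurement loop looks like, here is a minimal sketch of the idea; it is an illustration, not the actual time.go from the repo:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// timeIt runs a command several times and prints the average wall-clock time.
func timeIt(name string, args ...string) {
	const runs = 10
	var total time.Duration
	for i := 0; i < runs; i++ {
		start := time.Now()
		if err := exec.Command(name, args...).Run(); err != nil {
			fmt.Println("error:", err)
			return
		}
		total += time.Since(start)
	}
	fmt.Printf("Running: %s %v\navg: %v\n", name, args, total/runs)
}

func main() {
	timeIt("./hello.sh")                                // baseline, no Docker
	timeIt("docker", "run", "treeder/hello:sh")         // fresh container each run
	timeIt("docker", "run", "--rm", "treeder/hello:sh") // fresh container, removed
	timeIt("docker", "start", "-a", "reuse")            // reused container
}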

Hybrid Iron.io – On-Premise Job Processing with the Help of the Cloud

One of our main goals for the Iron.io platform is "run anywhere." This means we enable customers to use our services on any cloud, public or private. With Hybrid Iron.io, we're making it drop-dead simple to get the benefits of the public cloud with the security and control of a private cloud.

Using Iron.io's public cloud service is easy: you just sign up and start using it. There are no servers to deal with, no setup, and no maintenance. You can be up and running with a very powerful technology in a matter of minutes. It's a beautiful thing. Continue reading "Hybrid Iron.io – On-Premise Job Processing with the Help of the Cloud"

Four and a Half Years of Go in Production at goto Chicago 2016


Travis Reeder, CTO and co-founder of Iron.io, gave a talk at GOTO Chicago 2016 discussing Iron.io's early migration to Go: why we changed our infrastructure and the benefits it has brought us.

One of the questions that always comes up after telling people we migrated to Go is:

“Why not Ruby?”

Continue reading “Four and a Half Years of Go in Production at goto Chicago 2016”

Introducing Lambda support on Iron.io


Serverless computing has become a compelling model for companies to add business value without their development teams having to worry about provisioning, managing, and scaling infrastructure. The concept is that developers write code that performs business logic based on some specific input data, and the platform handles the details of:

  • Where to run it: Use some machine with available capacity in its pool
  • When to run it: Either event-driven or scheduled
  • How to run it: Decouple your developers from your runtime. You do not have to be concerned about whether your program is running on bare metal, in a VM or in some sort of container

Continue reading “Introducing Lambda support on Iron.io”