Amazon SQS (Simple Queue Service): Overview and Tutorial

What’s a Queue?  What’s Amazon SQS?

[Image: "Now that's quite a queue!"]

Queues are a powerful way of decoupling software architectures. They allow for asynchronous communication between different systems, and are especially useful when the throughput of the systems is unequal. Amazon offers its version of queues with Amazon SQS (Simple Queue Service).

For example, if you have something like:

  • System A – produces messages periodically in huge bursts
  • System B – consumes messages constantly, at a slower pace

With this architecture, a queue allows System A to produce messages as fast as it can, and System B to slowly digest the messages at its own pace.

Queues have played an integral role in software architecture for decades, along with core technology concepts like APIs (Application Programming Interfaces) and ETL/ELT (Extract, Transform, Load / Extract, Load, Transform). With the recent trend toward microservices, queues have become more important than ever.

Amazon Web Services

AWS (Amazon Web Services) is one of the leading cloud providers in the world, and anyone writing software is probably familiar with them. AWS offers a wide variety of “simple” services that traditionally had to be implemented in-house (e.g., storage, databases, computing). The advantages offered by cloud providers are numerous, and include:

  • Better scalability – your data center is a drop in their ocean. They’ve got mind-boggling capacity. And it’s spread around the world.
  • Better reliability – they hire the smartest people in the world (oodles of them) to ensure these services work correctly, all the time.
  • Better performance – you can typically harness as much computing horsepower as you’d like with cloud providers, far exceeding what you could build in-house.
  • Better (lower) cost – nowadays, they can usually do all this cheaper than you could in your own data center, especially when you account for all the expertise they bring to the table. And many of these services employ a “pay as you go” model, charging for usage as it occurs. So you don’t have to pay the large up front cost for licenses, servers, etc.
  • Better security – their systems are always up to date with the latest patches, and all their smart brainiacs are also thinking about how to protect their systems.

If you have to choose between building out your own infrastructure, or going with something in the cloud, it’s usually an easy decision.

AWS Simple Queue Service

It comes as no surprise that AWS also offers a queueing service, simply named Amazon Simple Queue Service (SQS). It touts all the cloud benefits mentioned above, and also features:

  • Automatic scaling – if your volume grows you never have to give a thought to your queuing architecture. AWS takes care of it under the covers.
  • Infinite scaling – while there probably is some sort of theoretical limit here (how many atoms are in the universe?), AWS claims to support any level of traffic.
  • Server side encryption – using AWS SSE (Server Side Encryption), messages can remain secure throughout their lifetime on the queues.

Their documentation is also top-notch. It’s straightforward to get started playing with the technology, and when you’re ready for serious, intricate detail, the documentation goes deep enough to get you there.

Example

Let’s walk through a simple example of using AWS SQS, using the line at the DMV (Department of Motor Vehicles) as the example subject matter. The DMV is notorious for long waits, forcing people to corral themselves into some form of a line. While this isn’t an actual use case anyone would (presumably) solve using AWS SQS, it will allow us to quickly demo their capabilities, with a real-world situation most are all too familiar with.

While AWS SQS has SDK libraries for almost any language you may want to use, I’ll be using the REST interface for this exercise (with my trusty REST sidekick, Postman!).

Authorization

Postman makes it easy to set up all the necessary authorization using Collections. Configure the AWS authorization in the parent collection with the Access Key and Secret Access Key found in the AWS Console:

[Screenshot: AWS SQS authorization configured on the parent collection]

Then reference that authorization in each request:

[Screenshot: AWS SQS request inheriting auth from the parent collection]

Using this pattern, it’s easy to quickly spin up requests and put AWS SQS through its paces.
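If you’d rather stay on the command line, newer versions of curl can produce the same AWS Signature Version 4 signature that Postman generates. Here’s a minimal sketch, assuming curl 7.75+ (for the --aws-sigv4 option) and credentials exported as environment variables:

# Sketch: a signed SQS Query API call from the command line
curl --get "https://sqs.us-east-1.amazonaws.com" \
  --aws-sigv4 "aws:amz:us-east-1:sqs" \
  --user "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY" \
  --data-urlencode "Action=ListQueues" \
  --data-urlencode "Version=2012-11-05"

The --get flag tells curl to append the --data-urlencode parameters to the query string, which matches the GET-style requests used throughout this post.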

Creating a Queue

When people first walk in the door, any DMV worth their salt will give them a number to begin the arduous process. This is your main form of identification for the next few minutes/hours (depending on that day’s “volume”), and it’s how the DMV employees think of you (“Number 14 over there sure seems a bit testy!”).

Let’s create our “main queue” now, with the following REST invocation:

Request:

GET https://sqs.us-east-1.amazonaws.com?Action=CreateQueue&DefaultVisibilityTimeout=0&QueueName=MainLine&Version=2012-11-05

Response:

<CreateQueueResponse>
  <CreateQueueResult>
    <QueueUrl>https://sqs.us-east-1.amazonaws.com/612055710376/MainLine</QueueUrl>
  </CreateQueueResult>
  <ResponseMetadata>
    <RequestId>fa178e12-3178-5318-8d90-da20904943f0</RequestId>
  </ResponseMetadata>
</CreateQueueResponse>

Good deal. Now we’ve got a mechanism to track people as they come through the door.

Standard vs FIFO

One important detail that should be mentioned – there are two types of queues within AWS SQS:

  • Standard – higher throughput, with “at least once” delivery and “best effort” ordering.
  • FIFO (First-In-First-Out) – lower throughput, but with guarantees of “exactly once” processing and preserved message ordering.

Long story short, if you need things super fast, can tolerate messages out of order, and possibly sent more than once, Standard queues are the answer. If you need absolute guarantees on order of operations, no duplication of work, and don’t have huge throughput needs, then FIFO queues are the best choice.

We’d better make sure we create our MainLine queue using FIFO! While a “mostly in order” guarantee might suffice in some situations, you’d have a riot on your hands at the DMV if people started getting called out of order. Purses swinging, hair pulling – it wouldn’t be pretty. Let’s add “FifoQueue=true” to the query string to indicate that the queue should be FIFO. (One caveat when you try this for real: AWS requires FIFO queue names to end with the .fifo suffix, and messages sent to a FIFO queue must include a MessageGroupId; the examples below gloss over those details.)

Request

https://sqs.us-east-1.amazonaws.com?Action=CreateQueue&DefaultVisibilityTimeout=0&QueueName=MainLineFIFO&Version=2012-11-05&FifoQueue=true
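When you build this request yourself, note that the Query API expects queue attributes such as FifoQueue to be passed as Attribute.N name/value pairs. A hedged curl sketch of a fully valid FIFO creation (the queue name and settings here are just illustrative) might look like:

# Sketch: create a FIFO queue; FIFO queue names must end in .fifo
curl --get "https://sqs.us-east-1.amazonaws.com" \
  --aws-sigv4 "aws:amz:us-east-1:sqs" \
  --user "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY" \
  --data-urlencode "Action=CreateQueue" \
  --data-urlencode "QueueName=MainLine.fifo" \
  --data-urlencode "Attribute.1.Name=FifoQueue" \
  --data-urlencode "Attribute.1.Value=true" \
  --data-urlencode "Attribute.2.Name=ContentBasedDeduplication" \
  --data-urlencode "Attribute.2.Value=true" \
  --data-urlencode "Version=2012-11-05"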

Send Message

Now that we’ve got a queue, let’s start adding “people” to it, using the “SendMessage” action. Note that when using REST, we need to URL encode the payload. So something like this:

{
"name": "Ronnie Van Zandt",
"drivers_license_number": "1234"
}

Becomes this:

%7B%0A%20%20%20%22name%22%3A%20%22Ronnie%20Van%20Zandt%22%2C%0A%20%20%20%22drivers_license_number%22%3A%20%221234%22%0A%7D

There are many ways of accomplishing this; I find the urlencoder site to be easy and painless.
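If you’re scripting this rather than pasting into a browser, you can also do the encoding locally. One option (assuming you have jq installed) is its @uri filter:

# Sketch: URL-encode the JSON payload from the command line
printf '%s' '{"name": "Ronnie Van Zandt", "drivers_license_number": "1234"}' | jq -sRr @uri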

Here’s the final result:

Request

https://sqs.us-east-1.amazonaws.com/612055710376/MainLineFIFO?Version=2012-11-05&Action=SendMessage&MessageBody=%7B%0A%20%20%20%22name%22%3A%20%22Ronnie%20Van%20Zandt%22%2C%0A%20%20%20%22drivers_license_number%22%3A%20%221234%22%0A%7D

Response:

<SendMessageResponse>
  <SendMessageResult>
    <MessageId>00ad4e10-4394-450f-8902-4a9cf4b96b95</MessageId>
    <MD5OfMessageBody>b9f28edc9c6dc9fe2a86f5ae8efb2364</MD5OfMessageBody>
  </SendMessageResult>
  <ResponseMetadata>
    <RequestId>97a41dd4-5d15-59e0-b9f5-49e02fb4384d</RequestId>
  </ResponseMetadata>
</SendMessageResponse>

After this call, we’ve got young Ronnie standing in line at the DMV. Thanks to AWS’s massive scale and performance, we can leave Ronnie there as long as we’d like. And we can add as many people as we’d like – with AWS SQS’s capacity, we could have a line around the world. But that’s horrible customer service; someone needs to find out what Ronnie needs!

ReceiveMessage

At the DMVs I’ve been to, there’s usually a large electronic sign on the counter that will display the next lucky person’s number. You feel a brief pulse of joy when your number displays, and rush to the counter on a pillow of euphoria, eager to get on with your life. How do we recreate this experience in AWS SQS?

Why, “ReceiveMessage”, of course! (Note we are invoking it using the actual QueueUrl passed back by the CreateQueue call above)

Request

https://sqs.us-east-1.amazonaws.com/612055710376/MainLineFIFO?Action=ReceiveMessage&Version=2012-11-05

Response

<ReceiveMessageResponse>
  <ReceiveMessageResult>
    <Message>
      <MessageId>00ad4e10-4394-450f-8902-4a9cf4b96b95</MessageId>
      <ReceiptHandle>AQEBjq8apWDfLXE0pCbpABh6Wdx70ZbszY0k38t9u8Mrny1Jz+Q522Vwvvf4xLqzQHfjoHQd56JJJEM67LJG5tQ/YSCibFSNCg8jfadyNMbqBH48/WxmpYunI3w1+GbDCL2tlKkDz/Lm9akGasgDZEBtw6U9jw1Bu6XbzNuNiw5jfVzjC99E38KSvxvZMHfmSi3Wo2XOBAcfU0oTpLmGMwccGiRUOp4XtS38nMXHhBdtKSS+U11N38cJAtlnxHQJkXmTAk7ZdvpxJNtnOrXmeGN00vtf6OSyLJzRJJieYHNtxIyxojcGZcnJQ6dTveMWQ1A1FOzschRuavl3wtftDS/YSt5sDNeBcjEOE+Y0QE+18qiWaDZc+nlaetcBvqmt6Hbt</ReceiptHandle>
      <MD5OfBody>b9f28edc9c6dc9fe2a86f5ae8efb2364</MD5OfBody>
      <Body>{"name": "Ronnie Van Zandt", "drivers_license_number": "1234"}</Body>
    </Message>
  </ReceiveMessageResult>
  <ResponseMetadata>
    <RequestId>6a43b589-940c-52a4-bc62-e1bde75e22e4</RequestId>
  </ResponseMetadata>
</ReceiveMessageResponse>

One thing to keep in mind – ReceiveMessage doesn’t actually REMOVE the item from the queue; the item remains there until explicitly deleted. The Visibility Timeout is what keeps multiple readers from processing the same message: once a message has been received, it stays hidden from other consumers for the duration of the timeout.
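In practice you’d usually pair this with long polling and an explicit visibility timeout, so one consumer “locks” the message while it works. A hedged curl sketch (the parameter values here are just illustrative):

# Sketch: long-poll for up to 20s and hide the received message for 30s
curl --get "https://sqs.us-east-1.amazonaws.com/612055710376/MainLineFIFO" \
  --aws-sigv4 "aws:amz:us-east-1:sqs" \
  --user "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY" \
  --data-urlencode "Action=ReceiveMessage" \
  --data-urlencode "WaitTimeSeconds=20" \
  --data-urlencode "VisibilityTimeout=30" \
  --data-urlencode "MaxNumberOfMessages=1" \
  --data-urlencode "Version=2012-11-05"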

So how do we permanently mark the item as “processed”? By deleting it from the queue!

DeleteMessage

The DeleteMessage action is what removes items from a queue. There’s not really a good analogy with the DMV here (thankfully, DMV employees can’t “delete” us), so we’ll just go with an example. DeleteMessage takes the ReceiptHandle returned by the ReceiveMessage endpoint as a parameter (once again, encoded):

Request

https://sqs.us-east-1.amazonaws.com/612055710376/MainLineFIFO?Action=DeleteMessage&Version=2012-11-05&ReceiptHandle=AQEBjq8apWDfLXE0pCbpABh6Wdx70ZbszY0k38t9u8Mrny1Jz%2BQ522Vwvvf4xLqzQHfjoHQd56JJJEM67LJG5tQ%2FYSCibFSNCg8jfadyNMbqBH48%2FWxmpYunI3w1%2BGbDCL2tlKkDz%2FLm9akGasgDZEBtw6U9jw1Bu6XbzNuNiw5jfVzjC99E38KSvxvZMHfmSi3Wo2XOBAcfU0oTpLmGMwccGiRUOp4XtS38nMXHhBdtKSS%2BU11N38cJAtlnxHQJkXmTAk7ZdvpxJNtnOrXmeGN00vtf6OSyLJzRJJieYHNtxIyxojcGZcnJQ6dTveMWQ1A1FOzschRuavl3wtftDS%2FYSt5sDNeBcjEOE%2BY0QE%2B18qiWaDZc%2BnlaetcBvqmt6Hbt

Response

<DeleteMessageResponse>
  <ResponseMetadata>
    <RequestId>a69c7042-d0e2-546a-bdf7-2476a30b89df</RequestId>
  </ResponseMetadata>
</DeleteMessageResponse>

And just like that, Ronnie is able to leave the DMV with his newly printed license, all thanks to AWS SQS!
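If you’re scripting this rather than using Postman, curl’s --data-urlencode also spares you from hand-encoding the ReceiptHandle. A sketch, with the handle stashed in an environment variable (truncated here; use the full value returned by ReceiveMessage):

# Sketch: delete the message using the ReceiptHandle from ReceiveMessage
RECEIPT_HANDLE="AQEBjq8apWDf..."   # truncated for readability
curl --get "https://sqs.us-east-1.amazonaws.com/612055710376/MainLineFIFO" \
  --aws-sigv4 "aws:amz:us-east-1:sqs" \
  --user "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY" \
  --data-urlencode "Action=DeleteMessage" \
  --data-urlencode "ReceiptHandle=$RECEIPT_HANDLE" \
  --data-urlencode "Version=2012-11-05"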

[Image: "It's time to get out of here!"]

IronMQ vs AWS SQS

While AWS SQS has many strengths, there are advantages to IronMQ that make it a more compelling choice, including:

Client Libraries

IronMQ features an extensive set of client libraries with clear, straightforward documentation. Getting started with IronMQ is a breeze. After playing with both SDKs, I found the IronMQ experience to be easier.

Speed

IronMQ is much faster than SQS, and V3 makes it faster and more powerful than ever before. In high-volume systems, bottlenecks in your messaging architecture can bring the whole system to its knees. Faster is better, and IronMQ delivers in this area.

Push Queues

IronMQ offers something called Push Queues, which supercharge your queueing infrastructure with the ability to push messages OUT. Rather than relying solely on services pulling messages off queues, this allows your queues to proactively send messages to designated endpoints, recipients, etc. This powerful feature expands the communication options between systems, resulting in faster workflow completion and more flexible architectures.

Features

Check out the comparison matrix between IronMQ and its competitors (including SQS). IronMQ clearly stands out as the most feature-rich offering, with functionality not offered by SQS (or anyone else, for that matter).

In Con-q-sion

Hopefully this simple walkthrough is enough to illustrate some possibilities of using AWS SQS for your queuing needs. It is easy to use, incredibly powerful, and its SDKs support a variety of languages. And may your next trip to the DMV be just as uneventful as young Ronnie’s.

Happy queueing!

3 Key Benefits to Container-Based Background Job Processing

Whether deploying applications or providing microservices, being able to get tasks done in the background without user intervention is key to operating efficiently for IT and development teams. One effective way to facilitate background job processing is with the help of containers.

Container-based background job processing comes with a whole host of benefits. Here are some of the key benefits of using container-based background job processing that IT and development teams can leverage.

Enhanced Security

With ever-increasing data breaches and ransomware threats, keeping applications secure during deployment is vital. Managing the deployment of applications often calls for working with several development teams distributed across different locations. Having more people on these teams creates a higher risk of exposure and data breaches due to errors or vulnerabilities introduced by staff.

The great news is that containers offer enhanced security. That’s because a great deal of effort has gone into safeguarding them. For instance, container systems and container management systems, such as Docker and Kubernetes, support container image signing to ensure your team is deploying containers from trusted sources.

Moreover, container scanning solutions also help enhance security by quickly identifying vulnerabilities that may exist in your containers, including the containers that were signed. This helps reduce security risks, including deploying unsafe containers.

Versatile Background Job Capabilities

Being able to provide on-time delivery to clients is essential for enhancing the customer’s experience. With the help of container-based background job processing, IT and development teams can manage a variety of background tasks.

For instance, tasks such as email delivery, automated scaling, calculating bandwidth, or automating push notifications can be handled by containers. That’s because containers fragment applications into smaller components while enabling communication among developer teams. This also helps facilitate speedy software development and testing. Moreover, using a container-based workload platform from a development tool expert such as Iron.io helps enterprises free up staff from handling background job processing so they can focus on more vital tasks, such as testing and developing their software applications.

Flexible Deployment

Thanks to containers’ shareability, enterprises can leverage flexible deployment options, including the shared, on-premise, dedicated, or hybrid options offered by a reliable container-based workload platform such as Iron.io’s IronWorker. That means enterprise leaders can choose a deployment option that’s customized to their needs.

For instance, development teams working in enterprises that often deal with classified or highly sensitive data or personal information, such as banks, hospitals or federal agencies, often have to follow several compliance regulations. Having the ability to use on-premise deployment solutions can help support background tasks in a secure manner.

At the same time, enterprises that must support staying in compliance with enterprise and federal rules while facilitating a distributed team may find a hybrid deployment approach more feasible. This deployment option is ideal for handling secure background job processing for tasks, such as scheduling and authentication, while letting development teams run their containers on-premise.

Final Thoughts

From flexible deployment options to versatile background task processing capabilities, containers offer much for development teams to leverage. While containers provide several benefits, it’s important to also use reputable platforms and professional teams that have the experience and expertise in managing and implementing containers to support container-based background jobs.  By leveraging containers and the platforms that support them, enterprises can better serve their clients for an enhanced customer experience.

Iron’s East/West Coast Drink-up

A bunch of Iron employees will be out and about in April, looking to meet up with customers to chat about our upcoming platform changes. Beer (or wine, or cocktails, or <insert drink here>) will be on us! We’re sticking to the East and West Coasts for now, and our current plans are:

April 5th, San Francisco
April 14th, Boston
April 15th, NYC
April 17th, Los Angeles

If you’re interested in attending, fill out the form below.  We’ll be in touch with the details once we have them confirmed on our end.  Cheers!

A Serverless Message Queue Without the Glue

More and more technologies get involved as systems grow, and it’s sometimes hard to keep track of what’s doing what. Caching layers, message queues, serverless functions, tracing frameworks… the list goes on. Once you start sprinkling in public cloud services, you may find yourself developing your way into vendor lock-in. All of a sudden, you’re dealing with one cloud, tons of services, and having to glue everything together in order to make the services talk to each other. One of Iron’s primary goals is to make life easier for developers, and IronMQ’s little-known “Push Queue” feature is one that can help prevent you from having to write the glue.

What are Push Queues?

IronMQ has a built-in feature called Push Queues which, when enabled, fires off an event any time a message gets pushed onto that queue. This comes in extremely handy when you immediately want to “do something” (or many things) with that message. With traditional message queues, you’d usually need to write another process that polls your queues for messages at a given interval. IronMQ’s push queues can fire off events to different types of endpoints, each extremely helpful in its own way.

What types of events can be triggered?

HTTP
When a message gets put onto your push queue, IronMQ can make a POST request (with the message in the request body) to any URL of your choice. This is extremely handy when you want to notify other systems that some sort of event just happened or kick off another process.

MQ
Inception! You can have the delivery of a message populate another IronMQ queue. This is helpful if you want to tie multiple queues together or create a dead letter queue for example.

Worker
MQ can connect directly to IronWorker and pass its message as the payload to one of your jobs. How cool is that!?  In order to exemplify how cool that actually is, we’ll run through a real-life scenario.

MQ & Worker Example

Let’s say you have a time-sensitive nightly job that processes uploaded CSV files.  It needs to process all of the files uploaded during that day and finish within a set amount of time.   As your system grows and there are more CSV files to process, your nightly process starts running behind schedule.

You realize that a lot of the time spent in your nightly worker is spent formatting the CSV file into the correct format.  It would make sense to split this process into two distinct stages, formatting and processing.  When a CSV file is received, you could send a message to your push queue which in turn will kick off a “formatting” worker job to pre-process the CSV file into the correct format. Your nightly “processing” worker job will then be able to fly through the CSV files because it no longer needs to fix any formatting issues.

The beauty here is that you can continue to add more push events to the queue.  When a file is uploaded, maybe you also need to ping another worker that handles OCR or post an update to an external HTTP endpoint letting it know exactly “what” file was uploaded.  Without a push queue, you’d be adding a lot of custom code to handle these requests, retries, errors, etc.  IronMQ’s push queues take care of all of this for you.

How can I configure a Push Queue?

Retries
You can configure your queue to allow a custom number of retries, a custom delay between retries, and even provide another queue to store failed push attempts. For example, using an HTTP event, MQ will retry pushes (3 times by default) every time it receives a non-200 response code.

Timeouts
If your event never receives a response after a certain period of time (10 seconds by default), it will chalk that up as a failed attempt and retry.

Unicast or Multicast?
You can even fire off multiple events from one queue. If you need to trigger one HTTP endpoint and also fire off a Worker job, that’s not a problem.

How do I create a push queue?

Creating one is straightforward.  Here’s an example cURL request that creates a multicast Push Queue with an HTTP endpoint as well as a Worker endpoint.
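What follows is a rough, from-memory sketch of the shape of that request; treat the host, project ID, token, and subscriber URLs as placeholders and consult the IronMQ v3 documentation for the exact endpoint and fields:

# Rough sketch only -- check the IronMQ v3 docs for the authoritative request
curl -X PUT "https://mq-aws-us-east-1-1.iron.io/3/projects/PROJECT_ID/queues/my_push_queue" \
  -H "Authorization: OAuth YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "queue": {
          "type": "multicast",
          "push": {
            "retries": 3,
            "retries_delay": 60,
            "subscribers": [
              { "name": "http_endpoint", "url": "https://example.com/new_csv" },
              { "name": "format_worker", "url": "ironworker:///FormatCSV" }
            ]
          }
        }
      }'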

IronMQ has client libraries available in most languages, so you can easily create one programmatically as well.  Here’s an example in PHP:

Conclusion

With one IronMQ Push Queue, you can make a lot happen. If you were to try to replicate a multicast Push Queue in a traditional message queue, for example, you’d end up writing a lot of custom code to glue everything together. You’d also have to deal with scaling your infrastructure as your message queue needs grew. With IronMQ, you can spend more time on your application’s business logic and less time on glue and infrastructure. For more detailed information about Push Queues, visit the IronMQ documentation.

If you’re interested in knowing more about IronMQ, or want to chat about how we may be able to help, call us anytime at 888-501-4766 or email at support@iron.io.



IronFunctions Alpha 2

Today we are excited to announce the second alpha release of IronFunctions, the language-agnostic serverless microservices platform that you can run anywhere: on public, private, and hybrid clouds, even on your own laptop.

The initial release of IronFunctions received some amazing feedback and we’ve spent the past few months fixing many of the issues reported. Aside from fixes, the new release comes with a whole host of great new features, including:

  • Long(er) running containers for better performance, aka Hot Functions
  • LRU Cache
  • Triggers example for OpenStack project Picasso
  • Initial load balancer
  • fn: support route headers tweaks
  • fn: Add rustlang support
  • fn: Add .NET core support
  • fn: Add python support

Stay tuned for upcoming posts with insights about individual features such as the LRU cache, the load balancer, and the OpenStack integration.

What’s next?

We will be releasing a Beta with more fixes, improvements to the load balancer, and a much-anticipated new feature that will allow chaining of functions.

We’re excited to hear people’s feedback and ideas, and it’s important that we’re building something that solves real world problems so please don’t hesitate to file an issue, or join us for a chat in our channel on our Slack Team.

Thanks for all the love and support,
The Iron.io Team

Discuss on Hacker News
Join our Slack
File an Issue
Contact Iron.io about enterprise support

Announcing Hot Functions for IronFunctions

IronFunctions is a serverless application platform. Unlike AWS Lambda, it’s open source and language agnostic, and it can run on any cloud (public, on-premise, or hybrid) while maintaining AWS Lambda compatibility.

The initial release of IronFunctions received some amazing feedback, and the past few weeks were spent addressing outstanding issues. In this post I will be highlighting the biggest feature of the upcoming release: Hot Functions.

TL;DR:

Hot Functions improve IronFunctions throughput by 8x (depending on the duration of the task). By reusing containers (what we call Hot Functions), each call avoids roughly 300ms of startup overhead.

Details:

Before Hot Functions, IronFunctions would spin up a new container to handle every job. This led to a 300ms overhead per job due to container startup time.

With Hot Functions, long-lived containers are able to serve the same type of task without incurring the startup time penalty. They do this by taking incoming workloads, feeding them in through standard input, and writing results to standard output. In addition, permanent network connections are reused. For more information on implementing Hot Functions, see the GitHub docs.
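As a rough illustration of the idea (this is a generic sketch, not the exact IronFunctions wire format), a hot container is essentially a long-running loop over stdin:

#!/bin/sh
# Generic hot-container sketch: the process stays up and handles one payload
# per line on stdin, writing one response per line on stdout, so no container
# startup cost is paid per call.
while read -r payload; do
  echo "processed: $payload"
done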

We ran our benchmark on a 1 GB Digital Ocean instance and used honeycomb.io to plot the results.


Simple function printing “Hello World” called for 10s (MAX CONCURRENCY = 1).

Hot Functions have 162x higher throughput.


Complex function pulling image and md5 checksumming called for 10s (MAX CONCURRENCY = 1).

Hot Functions have 139x higher throughput.


By combining Hot Functions with concurrency we saw even better results: 

Complex function pulling image and md5 checksumming called for 10s (MAX CONCURRENCY = 7).

Hot Functions have 7.84x higher throughput.


There’s more to this release as well. IronFunctions brings the Single Flight pattern to DB calls, as well as stability and optimization fixes across the board.

IronFunctions is maturing quickly and our community is growing. To get involved, please join our Slack community and check out IronFunctions today!

Also stay tuned for upcoming announcements by following this blog and our developer blog.

Hacker News conversation here.

Announcing Project Picasso – OpenStack Functions as a Service

We are pleased to announce a new project to enable Functions as a Service (FaaS) on OpenStack — Picasso.

The mission is to provide an API for running FaaS on OpenStack, abstracting away the infrastructure layer while enabling simplicity, efficiency, and scalability for both developers and operators.

Picasso can be used to trigger functions from OpenStack services, such as Telemetry (via HTTP callback) or Swift notifications. This means no long running applications, as functions are only executed when called.

Picasso is comprised of two main components:

  • Picasso API
    • The Picasso API server uses Keystone authentication and authorization through its middleware.
  • IronFunctions
    • Picasso leverages the backend container engine provided by IronFunctions, an open-source Serverless/FaaS platform based on Docker.

Resources

 

We’ve created some initial blueprints to show what the future roadmap looks like for the project.

You can try out Picasso now on DevStack by following the quick start guide here. Let us know what you think!

If you’re interested in contributing or just have any questions, please join us on the #OpenStack channel in Slack.

Announcing IronFunctions Open Source

 

Today we’re excited to announce IronFunctions, our first major open source project.

IronFunctions is a serverless microservices platform that you can run anywhere: on public, private, and hybrid clouds, even on your own laptop. The world is moving towards hybrid/multi-cloud, and so should your serverless platform.

It runs on top of the popular orchestration frameworks (Kubernetes, Mesosphere), inside PaaS runtime environments (CloudFoundry, OpenShift), and on bare metal.

Functions are packaged using Docker, so the platform supports any language and any dependencies, and functions can run anywhere. It will also eventually support other container technologies. Today it supports the Lambda function format for easy portability, and will soon support others as well.

IronFunctions is written in Go, is extremely fast, and was built with scalability and operability in mind.

Finally, it’s being driven by our team at Iron.io that is unashamedly taking credit for coining the term serverless dating back to 2011 and 2012. We’ve launched billions of containers through our flagship serverless job processing service IronWorker, and now bring this knowledge and experience to IronFunctions to round out our portfolio of products with synchronous capabilities.

So without further ado, we’d love your help in building an amazing platform and community. Fork the repo and please give us pull requests and create issues!

The Project: https://github.com/iron-io/functions

Join our Slack room: http://get.iron.io/open-slack

The Press Release: http://www.marketwired.com/press-release/ironio-releases-first-open-source-project-2175887.htm

Join the conversation: https://news.ycombinator.com/item?id=12961296

Thanks for supporting Iron.io for the past 5+ years.

Chad Arimura
CEO, Iron.io

The Overhead of Docker Run

First published on Medium on 10/11/2016.

We use Docker a lot. Like a lot, lot. While we love it for a lot of things, it still has a lot of room for improvement. One of those areas that could use improvement is the startup/teardown time of running a container.

The Test

To test the overhead of running a Docker container, I made a script that compares execution times for various docker run options vs not using Docker at all. The script that I’m running is a simple hello world shell script that consists of the following:

#!/bin/sh
echo "Hello World!"

The base Docker image is the official Alpine linux image plus the script above.

4 Things to Compare

  1. As a baseline, the first measurement is sans Docker. This is just running the hello.sh script directly.
  2. The second measure is just docker run IMAGE.
  3. The third measure adds the --rm flag to remove the container after execution.
  4. The final one is to use docker start instead of run, so we can see the effect of reusing an already created container.
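The actual timing script lives in the repo linked at the end of this post; conceptually it just runs each variant repeatedly and averages wall-clock time, something like this rough shell equivalent:

# Rough shell equivalent of the comparison (the repo's time.go does the real measuring)
time ./hello.sh                        # baseline, no Docker
time docker run treeder/hello:sh       # new container per run
time docker run --rm treeder/hello:sh  # new container, removed afterwards
docker create --name reuse treeder/hello:sh >/dev/null
time docker start -a reuse             # reuse an already created container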

Docker for Mac

Server Version: 1.12.2-rc1

Running: ./hello.sh
avg: 5.897752ms
Running: docker run treeder/hello:sh
avg: 988.098391ms
Running: docker run --rm treeder/hello:sh
avg: 999.637832ms
Running: docker start -a reuse
avg: 986.875089ms

(Note: looks like using Ubuntu as a base image is slightly faster than Alpine, in the 10–50ms range).

Docker on Ubuntu

Server Version: 1.12.1

Running: ./hello.sh
avg: 2.139666ms
Running: docker run treeder/hello:sh
avg: 391.171656ms
Running: docker run --rm treeder/hello:sh
avg: 396.385453ms
Running: docker start -a reuse
each: 340.793602ms

Results

As you can see from the results above, using Docker adds nearly a full second to the execution time of our script on Mac and ~390ms on Linux (~175x slower than running the script without Docker).

Now this may not be much of an issue if your script/application runs for a long period of time, but it is certainly an issue if you run short lived programs.

Try it yourself

Feel free to try running the script on your system and share the results! You can find everything you need here: https://github.com/treeder/dockers/tree/master/hello

Just clone that repo, cd into the hello directory and run:

go run time.go

Delivering on the Promise of Multicloud Lambda-like Functionality


In February, we launched a beta called Project Kratos. It promised to bring Lambda-like functionality to any cloud – public, private, hybrid, or on-premises. As we quickly approach Q4, February seems like a long time ago, but so much has happened since then.

Over the past seven months, serverless computing has gained momentum as more than just the hot topic of the moment. Because it allows enterprises to build and deploy applications and services at scale on flexible platforms that abstract away physical infrastructure, it’s quickly becoming a must-have for the modern enterprise, and it will soon be a competitive advantage for those already implementing it.

Our journey with serverless has also moved from a project announcement full of promises to a solution that is widely available today. First, in April, we announced the general availability of our multicloud solution. Since then, we’ve systematically partnered with leading cloud providers to support multicloud development.

In April, Iron.io announced its partnership with Mirantis to bring event-driven, serverless functionality to the OpenStack community. The joint solution enables enterprise developers using OpenStack to deliver applications and services faster through the serverless experience provided by Iron.io.

In May, Iron.io announced its collaboration with Cloud Foundry Foundation, home of the industry-standard multi-cloud platform, to integrate the Iron.io API with the Cloud Foundry platform.

In June, Iron.io brought the serverless experience to Red Hat OpenShift — a pairing that provided users with an end-to-end environment for building and deploying applications at scale, without the headaches of complex operations.

And in August, Iron.io announced its strategic partnership with Mesosphere, enabling microservices and serverless computing for modern data centers. Joint customers using Mesosphere’s Data Center Operating System (DC/OS) with Iron.io could experience enhanced flexibility to develop their hybrid cloud strategy and run distributed job processing across heterogeneous environments.

Yesterday, we announced that serverless functionality is now available on Cloud Foundry and that Iron.io supports Diego as a runtime for Iron.io workloads. Iron.io can now be deployed on top of Cloud Foundry, run inside of Cloud Foundry, and scale out Cloud Foundry containers.

Wow. I was here for all of it, and it still seems like a lot, but it’s only the beginning. The Iron.io team is committed to bringing a serverless experience to developers and companies far and wide.

If you want information on how we define serverless and why the world is moving this way, check out Chad Arimura’s presentation Best Practices for Implementing Serverless Architecture from the O’Reilly Software Architect conference or Dave Nugent and Ivan Dywer’s great Fireside Chat about serverless computing.