Top 10 Uses For A Message Queue


Geese love queues. (Image by D. Hilgart)

We’ve been working with, building, and evangelising message queues for the last year, and it’s no secret that we think they’re awesome. We believe message queues are a vital component of any architecture or application, and here are ten reasons why:
Continue reading “Top 10 Uses For A Message Queue”

Top 10 Uses of a Worker System

A worker system is an essential part of any production-scale cloud application. The ability to run tasks asynchronously in the background, process tasks concurrently at scale, or schedule jobs to run at regular intervals is crucial for handling the kinds of workloads and processing demands common in a distributed application.
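To make the pattern concrete, here is a minimal, generic sketch in Go of a background worker pool. This illustrates the concept only; it is not the IronWorker API, and the task type and worker count are placeholders:

package main

import (
	"fmt"
	"sync"
)

func main() {
	// A buffered channel stands in for the task queue.
	tasks := make(chan string, 10)
	var wg sync.WaitGroup

	// Start a few background workers that process tasks concurrently.
	for w := 0; w < 3; w++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for t := range tasks {
				fmt.Printf("worker %d processing %s\n", id, t)
			}
		}(w)
	}

	// Enqueue work asynchronously; the producer doesn't block on results.
	for i := 0; i < 5; i++ {
		tasks <- fmt.Sprintf("task-%d", i)
	}
	close(tasks)
	wg.Wait()
}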

At Iron.io, we’re all about scaling workloads and performing work asynchronously, and we hear from our customers on a continuous basis. Almost every customer has a story about how they use the IronWorker platform to gain greater agility, eliminate complexity, or just get things done. We wanted to share a number of these examples so that other developers have answers to the simple questions “How do I use a worker system?” and “What can I do with a task queue?”

Continue reading “Top 10 Uses of a Worker System”

IronFunctions Alpha 2

Today we are excited to announce the second alpha release of IronFunctions, the language-agnostic serverless microservices platform that you can run anywhere: on public, private, and hybrid clouds, even on your own laptop.

The initial release of IronFunctions received some amazing feedback and we’ve spent the past few months fixing many of the issues reported. Aside from fixes, the new release comes with a whole host of great new features, including:

  • Long(er)-running containers for better performance, a.k.a. Hot Functions
  • LRU cache
  • An example of triggers for the OpenStack project Picasso
  • Initial load balancer
  • fn: support for tweaking route headers
  • fn: Rust support
  • fn: .NET Core support
  • fn: Python support

Stay tuned for upcoming posts with insights into individual features such as the LRU cache, the load balancer, and the OpenStack integration.

What’s next?

We will be releasing a beta with more fixes, improvements to the load balancer, and a much-anticipated new feature that will allow chaining of functions.

We’re excited to hear your feedback and ideas. It’s important to us that we’re building something that solves real-world problems, so please don’t hesitate to file an issue or join us for a chat in our channel on our Slack team.

Thanks for all the love and support,
The Iron.io Team

Discuss on Hacker News
Join our Slack
File an Issue
Contact Iron.io about enterprise support

Announcing Project Picasso – OpenStack Functions as a Service

We are pleased to announce a new project to enable Functions as a Service (FaaS) on OpenStack — Picasso.

The mission is to provide an API for running FaaS on OpenStack, abstracting away the infrastructure layer while enabling simplicity, efficiency, and scalability for both developers and operators.

Picasso can be used to trigger functions from OpenStack services such as Telemetry (via HTTP callback) or Swift notifications. This means no long-running applications: functions are executed only when called.
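Conceptually, triggering a function is just an HTTP call to a function route. Here is a minimal sketch in Go; the host, port, app name, route, and payload below are placeholder assumptions for illustration, not Picasso’s actual endpoints:

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	// POST an event payload to a (hypothetical) function route.
	resp, err := http.Post(
		"http://localhost:8080/r/myapp/hello",
		"application/json",
		strings.NewReader(`{"source":"swift-notification"}`),
	)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Print whatever the function wrote as its response.
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}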

Picasso is comprised of two main components:

  • Picasso API
    • The Picasso API server uses Keystone authentication and authorization through its middleware.
  • IronFunctions
    • Picasso leverages the backend container engine provided by IronFunctions, an open-source Serverless/FaaS platform based on Docker.

Resources


We’ve created some initial blueprints to show what the future roadmap looks like for the project.

You can try out Picasso now on DevStack by following the quick start guide here. Let us know what you think!

If you’re interested in contributing or just have any questions, please join us on the #OpenStack channel in Slack.

Announcing IronFunctions Open Source


Today we’re excited to announce IronFunctions, our first major open source project.

IronFunctions is a serverless microservices platform that you can run anywhere: on public, private, and hybrid clouds, even on your own laptop. The world is moving towards hybrid and multi-cloud, and so should your serverless platform.

It runs on top of the popular orchestration frameworks (Kubernetes, Mesosphere), inside PaaS runtime environments (CloudFoundry, OpenShift), and on bare metal.

Functions are packaged using Docker, so IronFunctions supports any language and any dependencies, and can run anywhere; eventually it will support other container technologies as well. Today it also supports the AWS Lambda function format for easy portability, with more formats to come.
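As a hedged illustration of what such a function can look like, here is a small Go program that reads a JSON payload and writes its result. The stdin/stdout contract and the payload shape here are assumptions for the sketch, not a specification of the IronFunctions API:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// payload is a hypothetical input shape for this example.
type payload struct {
	Name string `json:"name"`
}

func main() {
	var p payload
	// Read the function's input from stdin; an empty or missing
	// payload falls through to the default greeting below.
	_ = json.NewDecoder(os.Stdin).Decode(&p)
	if p.Name == "" {
		p.Name = "world"
	}
	// Write the function's output to stdout.
	fmt.Printf("Hello, %s!\n", p.Name)
}

Packaged into a Docker image, a program like this can be written in any language that can read and write a stream.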

IronFunctions is written in Go, is extremely fast, and was built with scalability and operability in mind.

Finally, it’s driven by our team at Iron.io, which unashamedly takes credit for coining the term serverless back in 2011 and 2012. We’ve launched billions of containers through IronWorker, our flagship serverless job processing service, and we now bring that knowledge and experience to IronFunctions to round out our portfolio of products with synchronous capabilities.

So without further ado, we’d love your help in building an amazing platform and community. Fork the repo, send us pull requests, and create issues!

The Project: https://github.com/iron-io/functions

Join our Slack room: http://get.iron.io/open-slack

The Press Release: http://www.marketwired.com/press-release/ironio-releases-first-open-source-project-2175887.htm

Join the conversation: https://news.ycombinator.com/item?id=12961296

Thanks for supporting Iron.io for the past 5+ years.

Chad Arimura
CEO, Iron.io

The Overhead of Docker Run

First published on Medium on 10/11/2016.

We use Docker a lot. Like a lot, lot. While we love it for a lot of things, it still has a lot of room for improvement. One of those areas is the startup/teardown time of running a container.

The Test

To test the overhead of running a Docker container, I made a script that compares execution times for various docker run options vs not using Docker at all. The script that I’m running is a simple hello world shell script that consists of the following:

echo "Hello World!"

The base Docker image is the official Alpine Linux image plus the script above.

4 Things to Compare

  1. As a baseline, the first measurement is sans Docker: just running the hello.sh script directly.
  2. The second measurement is a plain docker run IMAGE.
  3. The third measurement adds the --rm flag to remove the container after execution.
  4. The final one uses docker start instead of docker run, so we can see the effect of reusing an already created container.
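The full timing script lives in the repo linked at the end of this post. As a rough sketch of the approach in Go (the run count and the pre-created container name "reuse" are assumptions), the measurement boils down to:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// avgRunTime executes a command several times and returns the mean
// wall-clock duration, including process startup and teardown.
func avgRunTime(name string, args ...string) time.Duration {
	const runs = 10
	var total time.Duration
	for i := 0; i < runs; i++ {
		start := time.Now()
		if err := exec.Command(name, args...).Run(); err != nil {
			fmt.Println("error:", err)
		}
		total += time.Since(start)
	}
	return total / runs
}

func main() {
	fmt.Println("./hello.sh:", avgRunTime("./hello.sh"))
	fmt.Println("docker run:", avgRunTime("docker", "run", "treeder/hello:sh"))
	fmt.Println("docker run --rm:", avgRunTime("docker", "run", "--rm", "treeder/hello:sh"))
	// Assumes a container named "reuse" was created beforehand, e.g.:
	//   docker create --name reuse treeder/hello:sh
	fmt.Println("docker start -a:", avgRunTime("docker", "start", "-a", "reuse"))
}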

Docker for Mac

Server Version: 1.12.2-rc1

Running: ./hello.sh
avg: 5.897752ms
Running: docker run treeder/hello:sh
avg: 988.098391ms
Running: docker run --rm treeder/hello:sh
avg: 999.637832ms
Running: docker start -a reuse
avg: 986.875089ms

(Note: it looks like using Ubuntu as a base image is slightly faster than Alpine, in the 10–50ms range.)

Docker on Ubuntu

Server Version: 1.12.1

Running: ./hello.sh
avg: 2.139666ms
Running: docker run treeder/hello:sh
avg: 391.171656ms
Running: docker run --rm treeder/hello:sh
avg: 396.385453ms
Running: docker start -a reuse
each: 340.793602ms

Results

As you can see from the results above, using Docker adds nearly a full second to the execution time of our script on Mac and ~390ms on Linux (roughly 170–180x slower than running the script without Docker, depending on the platform).

Now this may not be much of an issue if your script/application runs for a long period of time, but it is certainly an issue if you run short-lived programs.

Try it yourself

Feel free to try running the script on your system and share the results! You can find everything you need here: https://github.com/treeder/dockers/tree/master/hello

Just clone that repo, cd into the hello directory, and run:

go run time.go

Hybrid Iron.io – On-Premise Job Processing with the Help of the Cloud

One of our main goals for the Iron.io platform is “run anywhere”: we enable customers to use our services on any cloud, public or private. With Hybrid Iron.io, we’re making it drop-dead simple to get the benefits of the public cloud with the security and control of a private cloud.

Using Iron.io’s public cloud service is easy: you just sign up and start using it. There are no servers to deal with, no setup, and no maintenance. You can be up and running with a very powerful technology in a matter of minutes. It’s a beautiful thing.

Continue reading “Hybrid Iron.io – On-Premise Job Processing with the Help of the Cloud”

Four and a Half Years of Go in Production at goto Chicago 2016


Travis Reeder, CTO and co-founder of Iron.io, gave a talk at goto Chicago 2016 discussing Iron.io’s early migration to Go, why we changed our infrastructure, and the benefits it has brought us.

One of the questions that always comes up after telling people we migrated to Go is:

“Why not Ruby?”

Continue reading “Four and a Half Years of Go in Production at goto Chicago 2016”

Introducing Lambda support on Iron.io


Serverless computing has become a compelling model for companies to add business value without their development teams having to worry about provisioning, managing, and scaling infrastructure. The concept is that developers write code that performs business logic on some specific input data, and the platform handles the details of:

  • Where to run it: Use some machine with available capacity in its pool
  • When to run it: Either event-driven or scheduled
  • How to run it: Decouple your developers from your runtime. You do not have to be concerned about whether your program is running on bare metal, in a VM, or in some sort of container
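As a sketch of that contract (purely illustrative: Iron.io’s initial Lambda support targeted the function formats Lambda offered at the time, while the Go runtime and the aws-lambda-go package shown here came later and are not part of Iron.io’s API), a function is just business logic handed to the platform:

package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-lambda-go/lambda"
)

// event is a hypothetical input shape for this example.
type event struct {
	Name string `json:"name"`
}

// handler contains only business logic; the platform decides where,
// when, and how it runs.
func handler(ctx context.Context, e event) (string, error) {
	return fmt.Sprintf("Hello, %s!", e.Name), nil
}

func main() {
	// Hand control to the runtime, which invokes handler once per event.
	lambda.Start(handler)
}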

Continue reading “Introducing Lambda support on Iron.io”

Best Practices and Anti-Patterns for Workers and MQs


Thanks to Ruth Hartnup for the base image! CC BY 2.0

If you’ve been programming for a while, it’s probable that someone, somewhere, has recommended the Gang of Four book.

The book dissects object-oriented programming. It lists numerous ways of royally messing things up, but its claim to fame is that it also lists ways to do things right! These well-tested paths to success often come with explanations of when to use them and why they’re good at avoiding common pitfalls.

These are design patterns. They’re embedded in the culture of programming, and they’re an amazing way to learn from others’ mistakes. At the outset of any project, many paths are open to you. Design patterns help you tell the dark paths apart from the healthy, low-stress approaches.

Today, we’re releasing our very own set of best practices and anti-patterns in the form of a white paper. It’s a quick read and will save you time as you tinker with your own workers and message queues. So, what do you have to lose?

Continue reading “Best Practices and Anti-Patterns for Workers and MQs”