We’ve been working with, building, and evangelizing message queues for the last year, and it’s no secret that we think they’re awesome. We believe message queues are a vital component of any architecture or application; our “Top 10 Uses For A Message Queue” post lays out ten reasons why.
A worker system is an essential part of any production-scale cloud application. The ability to run tasks asynchronously in the background, process tasks concurrently at scale, or schedule jobs to run at regular intervals is crucial for handling the kinds of workloads and processing demands common in a distributed application.
At Iron.io, we’re all about scaling workloads and performing work asynchronously, and we hear from our customers on a continuous basis. Almost every customer has a story about how they use the IronWorker platform to gain greater agility, eliminate complexity, or just get things done. We wanted to share a number of these examples so that other developers have answers to the simple questions “How do I use a worker system?” and “What can I do with a task queue?”
Today we are excited to announce the second alpha release of IronFunctions, the language-agnostic serverless microservices platform that you can run anywhere: on public, private, and hybrid clouds, even on your own laptop.
The initial release of IronFunctions received some amazing feedback, and we’ve spent the past few months fixing many of the issues reported. Aside from fixes, the new release comes with a whole host of great new features.
Stay tuned for upcoming posts with insights into individual features such as the LRU, the load balancer, and the OpenStack integrations.
We will be releasing a Beta with more fixes, improvements to the load balancer, and a much-anticipated new feature that will allow chaining of functions.
We’re excited to hear people’s feedback and ideas, and it’s important that we’re building something that solves real-world problems, so please don’t hesitate to file an issue or join us for a chat in our channel on our Slack team.
Thanks for all the love and support,
The Iron.io Team
IronFunctions is a serverless application platform. Unlike AWS Lambda, it is open source, language agnostic, and able to run on any cloud (public, on-premises, or hybrid) while maintaining AWS Lambda compatibility.
The initial release of IronFunctions received some amazing feedback, and we’ve spent the past few weeks addressing outstanding issues. In this post I’ll highlight the biggest feature of the upcoming release: Hot Functions.
Hot Functions improve IronFunctions throughput by 8x (depending on the duration of the task). By reusing containers, what we call Hot Functions, each call avoids roughly 300ms of overhead.
Before Hot Functions, IronFunctions would spin up a new container to handle every job. This led to a 300ms overhead per job due to container startup time.
With Hot Functions, long-lived containers serve many tasks of the same type without incurring the startup penalty. They do this by taking incoming workloads, feeding them in through standard input, and writing results to standard output. In addition, permanent network connections are reused. For more information on implementing Hot Functions, see the GitHub docs.
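To make the model concrete, here’s a minimal sketch of a hot function in Go. It is only a sketch of the idea, not the exact wire format IronFunctions uses (the GitHub docs cover that); the line-per-task framing here is an assumption for illustration.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// The container stays warm: rather than exiting after a single
	// task, we loop, reading one task per line from stdin and writing
	// one result per line to stdout. (Illustrative framing only; the
	// real protocol is described in the GitHub docs.)
	in := bufio.NewScanner(os.Stdin)
	out := bufio.NewWriter(os.Stdout)
	for in.Scan() {
		fmt.Fprintf(out, "Hello %s\n", in.Text())
		out.Flush() // flush per task so the platform sees each result
	}
}
```

A single warm process like this is what lets the platform skip the ~300ms container start on every call.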
We ran our benchmark on a 1 GB DigitalOcean instance and used honeycomb.io to plot the results.
Simple function printing “Hello World” called for 10s (MAX CONCURRENCY = 1).
Hot Functions have 162x higher throughput.
Complex function pulling image and md5 checksumming called for 10s (MAX CONCURRENCY = 1).
Hot Functions have 139x higher throughput.
By combining Hot Functions with concurrency we saw even better results:
Complex function pulling image and md5 checksumming called for 10s (MAX CONCURRENCY = 7).
Hot Functions have 7.84x higher throughput.
There’s more to this release as well: IronFunctions now applies the single flight pattern to DB calls, plus stability and optimization fixes across the board.
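For those unfamiliar with the pattern, single flight collapses concurrent duplicate calls into one: if many requests need the same value at the same moment, only one actually does the work and the rest share its result. Here’s a hedged sketch using Go’s golang.org/x/sync/singleflight package; the lookupApp and queryDatabase names are hypothetical, for illustration only.

```go
package main

import (
	"fmt"

	"golang.org/x/sync/singleflight"
)

var group singleflight.Group

// queryDatabase is a hypothetical stand-in for an expensive DB query.
func queryDatabase(name string) (interface{}, error) {
	return "config-for-" + name, nil
}

// lookupApp collapses concurrent lookups for the same app into a
// single database call; all waiting callers share the one result.
func lookupApp(name string) (string, error) {
	v, err, shared := group.Do("app:"+name, func() (interface{}, error) {
		return queryDatabase(name)
	})
	if err != nil {
		return "", err
	}
	_ = shared // true when the result came from another caller's flight
	return v.(string), nil
}

func main() {
	cfg, _ := lookupApp("myapp")
	fmt.Println(cfg)
}
```

Under bursty load, this keeps a hot endpoint from issuing the same query to the database dozens of times at once.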
IronFunctions is maturing quickly and our community is growing. To get involved, please join our Slack community and check out IronFunctions today!
Also stay tuned for upcoming announcements by following this blog and our developer blog.
We are pleased to announce a new project to enable Functions as a Service (FaaS) on OpenStack — Picasso.
The mission is to provide an API for running FaaS on OpenStack, abstracting away the infrastructure layer while enabling simplicity, efficiency, and scalability for both developers and operators.
Picasso can be used to trigger functions from OpenStack services, such as Telemetry (via HTTP callback) or Swift notifications. This means no long-running applications: functions are executed only when called.
Picasso consists of two main components:
The Picasso API server, which handles Keystone authentication and authorization through its middleware.
IronFunctions, an open-source serverless/FaaS platform based on Docker, which Picasso leverages as its backend container engine.
Today we’re excited to announce IronFunctions, our first major open source project.
IronFunctions is a serverless microservices platform that you can run anywhere: on public, private, and hybrid clouds, even on your own laptop. The world is moving towards hybrid/multi-cloud, and so should your serverless platform.
Functions are packaged using Docker, so the platform supports any language and any dependencies and can run anywhere. It will eventually support other container technologies as well. Today it supports the Lambda function format for easy portability, and more formats are on the way.
IronFunctions is written in Go, is extremely fast, and is built with scalability and operability in mind.
Finally, it’s being driven by our team at Iron.io, which unashamedly takes credit for coining the term serverless back in 2011 and 2012. We’ve launched billions of containers through our flagship serverless job processing service, IronWorker, and now bring that knowledge and experience to IronFunctions, rounding out our portfolio of products with synchronous capabilities.
So without further ado, we’d love your help in building an amazing platform and community. Fork the repo and please give us pull requests and create issues!
We use Docker a lot. Like a lot, lot. While we love it for many things, it still has room for improvement. One of those areas is the startup/teardown time of running a container.
To test the overhead of running a Docker container, I made a script that compares execution times for various docker run options vs not using Docker at all. The script that I’m running is a simple hello world shell script that consists of the following:
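```sh
#!/bin/sh
# A trivial hello-world script (the original listing was trimmed in
# this excerpt; it was presumably along these lines).
echo "Hello World!"
```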
It’s 3/14, and that means it’s International Pi Day! A day when we rejoice over the transcendental number that seems to be everywhere.
So, why am I writing about pi on the Iron.io blog? It turns out pi is the best (read: the absolute best!) way to test out computers. Computing it is sufficiently random, requires large amounts of memory and CPU, and is easy to check.
“I have no interest as a hobby for extending the known value of pi itself. I have a major interest for improving the performance of computation. [..] Mathematical constants like the square root of 2, e, and gamma are some of the candidates, but pi is the most effective.”
How To Make Pi
I’m on board! I want to make pi myself. If pi is a great way to test any computer, why not use it to test first-class distributed computing solutions, like IronWorker?
Humans have known about pi for a long while, which is part of what makes it a great computation: we have multiple recipes for baking the same dish, so it’s easy to check our work by comparing two algorithms, as the sketch below shows.
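As a toy illustration (nothing like the record-setting algorithms), here are two classic recipes in Go: the slowly converging Leibniz series and Machin’s arctangent formula. Because they are independent, they can be checked against each other.

```go
package main

import (
	"fmt"
	"math"
)

// leibniz sums the series pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
// It converges very slowly, but it is an independent recipe.
func leibniz(terms int) float64 {
	sum, sign := 0.0, 1.0
	for k := 0; k < terms; k++ {
		sum += sign / float64(2*k+1)
		sign = -sign
	}
	return 4 * sum
}

// machin uses pi/4 = 4*atan(1/5) - atan(1/239), which converges fast.
func machin() float64 {
	return 16*math.Atan(1.0/5) - 4*math.Atan(1.0/239)
}

func main() {
	a, b := leibniz(10_000_000), machin()
	fmt.Println(a, b, math.Abs(a-b)) // two recipes, one dish: they agree
}
```

When two very different formulas agree digit for digit, you can be reasonably confident in the hardware and the code that produced them.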
Docker enables you to package up your application along with all of the application’s dependencies into a nice self-contained image. You can then use that image to run your application in containers. The problem is that you usually package up a lot more than what you need, so you end up with a huge image and therefore huge containers. Most people who start using Docker will use Docker’s official repositories for their language of choice, but unfortunately, if you use them, you’ll end up with images the size of the Empire State Building when you could be building images the size of a birdhouse. You simply don’t need all of the cruft that comes along with those images. If you build a Node image for your application using the official Node image, it will be a minimum of 643 MB, because that’s the size of the official Node image.
I created a simple Hello World Node app, built it on top of the official Node image, and it weighs in at 644 MB.
That’s huge! My app is less than 1 MB with dependencies, and the Node.js runtime is ~20 MB. What’s taking up the other ~620 MB? We must be able to do better.
What is a Microcontainer?
A Microcontainer contains only the OS libraries and language dependencies required to run an application and the application itself. Nothing more.
Rather than starting with everything but the kitchen sink, start with the bare minimum and add dependencies as needed.
Taking the exact same Node app above, but using a really small base image and installing just the essentials (namely Node.js and its dependencies), the image comes out to 29 MB. A full 22 times smaller!
Try running both of those yourself right now if you’d like: docker run --rm -p 8080:8080 treeder/tiny-node:fat, then docker run --rm -p 8080:8080 treeder/tiny-node:latest. Exact same app, vastly different sizes.
Thanks to JD Hancock for the base image! CC BY 2.0
Anyone who’s ever done ETL knows it can get seriously funky. When I first started working on ETL, I was parsing data for a real estate company. Every once in a while, roofing data would appear in the pool field. “Shingles” isn’t a compelling feature for swimming pools. Go figure.
Thankfully, Node.js gives us a lingua franca for sharing cool solutions. A search for data validation shows there are more than a few options. For ETL, let’s take a look at just one of them.