Robotics with Iron

We recently sponsored a robotics event in Tokyo held by OnLab, DigitalGarage, and Psygig, and it was awesome. And yes, the course below was an actual course from the event.

The participants broke into teams, built a drone, implemented machine learning techniques, gathered and analyzed data via Iron, and maneuvered their drones across multiple courses. The teams that finished the courses and displayed the most innovative technical solutions were crowned champions.



The ability to utilize GPU instances and fire up containers running libraries like Keras and TensorFlow allows heavy computational workloads to be offloaded even in highly dynamic environments. In the last few months, we’ve been speaking with more and more customers who use Iron for large ML and AI workloads, often breaking them into distinct types of work units with different GPU, CPU, and memory requirements.
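That work-unit split can be sketched roughly as follows. This is an illustrative pattern only, not Iron’s API; the `WorkUnit` type and the pool names are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class WorkUnit:
    name: str
    needs_gpu: bool
    memory_mb: int

def route(unit: WorkUnit) -> str:
    """Pick a worker pool based on the unit's resource profile."""
    if unit.needs_gpu:
        return "gpu-pool"        # e.g. GPU instances running TensorFlow/Keras
    if unit.memory_mb > 2048:
        return "high-mem-pool"   # memory-heavy aggregation steps
    return "default-pool"        # everything else

# A training step is GPU-bound; image resizing is light.
print(route(WorkUnit("train-model", needs_gpu=True, memory_mb=8192)))    # gpu-pool
print(route(WorkUnit("resize-images", needs_gpu=False, memory_mb=512)))  # default-pool
```

Splitting a pipeline this way means each stage only pays for the resources it actually needs.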

Congratulations to the winners of the contest and all those who participated! It looked incredibly challenging. If you have any questions about utilizing GPUs, machine learning, artificial intelligence, or any other computational heavy lifting using Iron, feel free to contact us; we’ll be happy to chat.



Full Circle and Ramping Up at Iron.io

Iron.io was recently acquired by Xenon Ventures, a private equity and venture capital firm. Xenon Ventures is headed by Jonathan Siegel, a serial entrepreneur who has founded many popular software services and has made just as many successful acquisitions.

Here comes the full circle. What you may not know is that Jonathan was Iron.io’s first customer and investor back in 2010, prior to Iron.io’s creation. Jonathan was a client and friend of the founders’ consulting business, and he encouraged the founders to transform their consulting service into a product.

In 2011 the first version of IronWorker was launched and the serverless revolution began. After pioneering this space, Iron.io has grown significantly, adding products like IronMQ, IronCache, and our latest development, the open source IronFunctions. This success is due to all of our amazing customers and partners. Thank you!

New and old faces

You may be seeing a new name fly around a bit as well. I (Dylan Stamat) will be joining as General Manager and moving Iron forward. A little about me: I’ve been a personal friend of the founders, was a co-founder at RightSignature, founded one of the first HIPAA-compliant companies to run on AWS, was previously the CTO at a large technology consultancy (ELC Technologies), and am a Ruby on Rails contributor, a committer to Ehcache, and a big fan of Erlang and Golang.

You will also see and hear from many other familiar faces at Iron.io: Roman Kononov, Director of Engineering, who has been leading our engineering office in Bishkek, Kyrgyzstan since 2011; Nikesh Shah, Director of Business Development and Marketing; and various new and old members of our globally distributed teams.

Roman Kononov and Rob Pike (with an awesome jacket) at Gopherfest.

As of now, we’ve added two new offices to Iron: one in Las Vegas, Nevada, and the other in Tokyo, Japan. We’ve hired new engineers and customer success staff, and we’re continuing to hire. Let us know if you have any interest in joining our team!

What to expect moving forward

New graphs providing quick ways to visualize historical worker concurrency

The short answer is, things are going to get a lot better.  We’ve been very busy since the acquisition.  There have been a lot of bug fixes, improvements to internal tooling, and we’ve added concurrency graphs to help provide more insight into the system.

Near term, we are committed to ramping up development across our entire product line. This includes better performance and reliability, a new user interface, granular metrics reporting such as concurrency graphs, streamlined customer support with new systems in place to better track feature requests and bug reports, and bug fixes throughout our web applications.

Long term, as it relates to products, we are being guided by two core principles: Open Source and Hybrid Cloud Deployments.

I’m excited about the future and about getting to know all of you! We will have more exciting news to announce in the coming months. Please feel free to reach out to us, and stay tuned!

Dylan Stamat

Top 10 Uses of a Worker System

A worker system is an essential part of any production-scale cloud application. The ability to run tasks asynchronously in the background, process tasks concurrently at scale, and run jobs on regular schedules is crucial for handling the workloads and processing demands common in a distributed application.
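As a minimal sketch of the pattern, here is a pool of workers draining a shared task queue concurrently. This uses plain Python threads and an in-process queue purely for illustration; it is not IronWorker’s API:

```python
import queue
import threading

tasks: "queue.Queue[int]" = queue.Queue()
results = []
lock = threading.Lock()

def worker():
    # Each worker pulls tasks off the shared queue until it is drained.
    while True:
        try:
            n = tasks.get_nowait()
        except queue.Empty:
            return
        with lock:
            results.append(n * n)  # stand-in for real background work
        tasks.task_done()

for n in range(10):
    tasks.put(n)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # squares of 0..9, computed concurrently
```

A hosted worker system applies the same idea across machines: producers enqueue tasks, and a fleet of workers processes them independently of the web tier.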

At Iron.io, we’re all about scaling workloads and performing work asynchronously, and we hear from our customers on a continuous basis. Almost every customer has a story about how they use the IronWorker platform to gain greater agility, eliminate complexity, or just get things done. We wanted to share a number of these examples so that other developers have answers to the simple questions “How do I use a worker system?” and “What can I do with a task queue?” Continue reading “Top 10 Uses of a Worker System”

In Books: The San Francisco Fallacy

One of Iron’s original investors, Jonathan Siegel, released a book this week that any entrepreneur (or anybody who’s worked in the Bay Area) should definitely read. It’s titled “The San Francisco Fallacy: The Ten Fallacies That Make Founders Fail”, and Jonathan does an amazing job writing about his personal experiences in the art and science of building businesses.

It just launched yesterday and is being offered for $2.99 this week only (see the link above).  We highly recommend grabbing a copy, and what follows is a great excerpt from the book.  Enjoy!


“It’s all about the tech.”

The Tech Fallacy is perhaps the most pervasive fallacy in the tech world. It is endemic and insidious—perhaps inextricable. It first tripped me up as a teenager in my very first tech venture—but that wasn’t enough to cure me, for I have fallen victim to it often since then.

The Tech Fallacy says it’s all about the tech. Tech is the be-all and end-all of what we do. Get the tech right and the rest will follow.

This belief is deeply, badly wrong—as I first discovered in my teens.

How I Failed as a Pornographer

My first tech business was a kind of online forum. It was called a bulletin board system, where members could chat and share software. It failed.

I launched my second online business two years later. It was another bulletin board system where members could chat and share software. Oh, and they could download porn.

I have my parents to thank for my incipient career as a pornographer. My father bought me my first computer in 1989, during one of his periods of regular employment.

It was an IBM clone with 640 kilobytes of RAM and a 20-megabyte hard disk that weighed at least ten pounds. It had less power and memory than today’s inkjet printer.

The PC had a menu of random shareware on it: one of the most popular was called Lena.exe. It was just a grainy, scanned image of a Playboy Bunny (albeit fully clothed). You ran the program, and it pushed the pixels slowly onto the screen. It took minutes to load the full picture.

I soon outgrew the menu on the machine, and then I went exploring. The operating system, MS-DOS 3.0, came with a manual. I read it. I learned every command. I saw that there were things called “batch files.” I broke them open. This broke the computer. I watched it being fixed. I learned how to fix it myself (which was useful, because I kept breaking it). I learned how to write batch files. 

I did regular teardowns of my machine. The pieces were on big chips with big pins and on full-size circuit boards over a foot in length. With two screwdrivers, I could unscrew, unstrap, and pry apart everything but the few capacitors and resistors soldered to the emerald-green, silicon circuit boards.

Computers were so young then that it wasn’t clear to us what could go wrong, or why things broke. Disks would stop working and then work again. Displays wouldn’t display in one mode, but work in another. Reset a switch or copy a file and all would be mysteriously better. When something went wrong, it took laborious practice, by trial and error, to find the source of the problem.

This early digital technology was, in fact, fantastically unpredictable. It seemed magical that it worked at all, and as I developed my understanding of how it did work, my respect for that underlying magic increased.

I got into code. Like Neo learning to watch the Matrix, at first I just saw scrolling screens of ostensibly incomprehensible characters; gradually, I began to see patterns and life in them.

I started to search out greater challenges and discovered the bulletin board system (BBS) – a rudimentary precursor to the Internet. A modem was used to dial into a BBS at the cost of a normal call. The BBS allowed you to create a user profile, message others, chat in forums, download free software (shareware), and play games such as Trade Wars—a cheesy, text-based, space-frigate game.

The bulletin boards were a fertile environment for viruses, which spread easily via shareware. As a result, one of the highly sought early pieces of shareware was McAfee’s Virus Scanner, created by John McAfee in the 1980s.

McAfee uploaded his homemade virus scanner from his home to a local BBS, and it spread.

But McAfee’s shareware was also a currency in itself. If you met someone on a BBS and she mentioned that she had McAfee 2.052, and you had McAfee 2.088, then you had currency to trade with.

The bulletin boards placed tight restrictions on how many files you could download: typically, you had to upload one file to be allowed to download three – ensuring sustainability and growth for the BBS. So if you had some shareware that a BBS didn’t have, that would allow you to download three new pieces of shareware. And you could then use that to get new shareware from other bulletin boards.

But this was a different era of telecommunications, before cell phones. Landline calls within your local area were free, but long-distance calls were expensive. So if the BBS was locally-based, you could dial in for free; if it was further away, the cost would quickly get prohibitive – especially as downloads could take hours.

This created a market for more locally available software—a shareware broker. Local bulletin boards would set up to fill this market gap by downloading shareware from a distant BBS and providing it to local users for a subscription fee.

Most of these subscription bulletin boards provided a minimal free allowance to nonsubscribers, and because there was often more than one BBS within your toll-free zone, it was possible to seek out and trade shareware between boards. It was a classic network effect, with shareware spreading rapidly and efficiently at very low cost.

Then I saw an ad for a BBS in Sacramento that charged $60 per year and had two thousand subscribers. I thought I had misread it—$120,000 per year? For running a BBS—that is, for keeping a computer and a modem plugged in?

“I could do that!” I thought. And so I did.

Before long, I was hovering in front of my monitor late into the night, watching users work away on my BBS. In those days, you could see the screen your users saw and what they typed. “Analytics” meant staring at the monitor and watching what they were doing.

 I added a notice to the BBS that came up upon login that said I would accept donations. In June 1991, I got my first check in the post for $20.

I thought it would be the first $20 of $120,000. But it turned out to be one of the only checks I received. And it took me another year, and the onset of puberty, to realize that the distant BBS in Sacramento had another section in its files area—one I hadn’t previously discovered.

This wasn’t freeware. It was photos. Lots and lots and lots of photos. Salacious, compromising, illicit photos. There were even a few with crude, jiggly animations of bits bobbing to and fro.

The clue was in the ad, but I had missed it. I had thought the “XXX” was just some elementary formatting.

I had been duped by my prepubescent naivety. It wasn’t the pleasure of using my BBS that people were willing to pay for; it was an altogether more adult pleasure.

My excitement for the BBS drained. I shut it down and asked my mother to help me find a job. She took me to McDonald’s.

I came home in despair, sat down at the kitchen table, and took up the phone book. This was in the Bay Area, so there were pages of computer companies. I started calling. But I was a kid and nobody took me seriously. I kept calling.

Finally, somebody listened, invited me in for a chat, and eventually offered me a job. The company was called ZOZ Computers. I had cold-called my way right through the phone book.

Months into this first employment, I told my new boss about my BBS failure. I wanted to beat my competitors at their own game.

“What’s stopping you?” he asked.

“I’m too young to buy porn,” I said.

“I’ll buy it for you,” he replied. 

He ordered a set of CDs with porn images. I bought a six-disc CD changer for my computer and had three phone lines installed in my bedroom with modems. My mother was working long hours at the time and didn’t notice.

I had built myself a 386 computer and put my old 286 to work answering the phones. One Saturday in 1993, I announced my new BBS via a $15 ad in the local computer trader paper.

Like sixteen-year-olds all over the United States that weekend, I spent a lot of time in my bedroom because of porn. But I suspect there were few others—if any—whose interest was more entrepreneurial than voyeuristic. Though I was a voyeur too, in a sense—stalking my users as they navigated my BBS.

From the moment the red LED lights on the modems first lit up and the modem started to whir, signaling an incoming call, I was hooked to my screen, fascinated by what those callers were doing. When the lights came on, I felt a surge of pride and accomplishment; when they hung, indicating that the system had crashed, I felt a profound sense of failure. 

Users could get three photos a day for free, limited to ten a month, and they were limited to two hours in total online in a month. If they attempted to exceed that, they’d get a message: subscribe.

In order to become a subscriber, they had to download a pack of documents and sign and return them to me with a check: $35 per year. I remember watching my first pack being downloaded and the buzz of thinking, here comes my first customer.

I was aiming for one thousand subscribers. I had the latest tech, decent design, and a good stock of images. But six weeks later, I had made just over $400.

I had no ability to charge credit cards and was relying on people to send checks. Not having a credit card myself, because I was fifteen, it hadn’t occurred to me that I’d need to process credit cards.

Still, I was giddy with success and wanted to share it. I confided in one of the adults I respected, my mother’s landlord. He told her. She wasn’t so thrilled that her teenage son was a pornographer (and she wasn’t so impressed by the distinction between a pornographer and a pornography trader, either). She told me to shut it down. Between that and the too-slow income stream, I decided not to argue my First Amendment rights. At age sixteen, I’d notched up my second tech failure.

The Tech Fallacy Revisited

Selling porn taught me about the Tech Fallacy. I had believed that building great technology must mean that you’re building a great business. That it was all about the tech.

But selling porn taught me that the raison d’être for any business is to give the customer what he wants. He doesn’t want the tech; he wants what the tech can deliver. The tech is just the means to an end.

I thought I could make money from a well-built-and-run bulletin board system; however, decoding those ads in the computer trader papers with their XXXs made me realize that the market was more interested in the XXX than the BBS.

I love good tech. But I’ve learned to follow the good business. It’s a better path.

Take two rival companies. Each is armed with $1 million in investments. One spends $900,000 on its technology development, with $100,000 reserved for going to market (i.e., customer development, sales, and marketing). The other spends $100,000 on technology and $900,000 on going to market.

Who wins? The market-driven one does. It’s not the better product that wins; it’s the product that best knows how to reach its market.

If a thriving company made you its CEO and you decided to let go of its sales and marketing divisions to focus more on the technology, the board would fire you. But walk into almost any two-year-old funded startup, and you’ll see a growing development team budget and a speck, if any, allotted to sales and marketing.

Imagine an upstart competitor trying to challenge an entrenched leader without a sales and marketing division—it would be like a one-legged man in an ass-kicking contest.

Yet in the startups that I encounter, if the company has a team of ten, there’ll be nine developers and just one person who is business driven. Contrast that with companies that have gone public: you’ll see ninety salespeople for ten developers.

Why is this? Partly, it’s intrinsic. People who love what they do often prefer to do it to the exclusion of other things and may not even realize they’re doing this. Tech companies tend to be founded by people who love tech. A single-minded focus on the tech is to be expected but guarded against.

But it’s also a feature of the zeitgeist—the spirit of the times. This takes us back to the first dot-com era. As we’ll see later, the dot-com bubble was characterized by a focus on the idea to the exclusion of all else—even the tech.

When that bubble burst, it left a bad taste in people’s mouths, especially in the investment community. Tech startups acquired the reputation of being charlatans—all talk, no substance.

This perception created a pendulum swing: today, the emphasis in the startup market is often on developing innovative, hardcore technology, with a consequent failure to consider other crucial (maybe more crucial) aspects of the business.

There is a happy medium. Tech is helping to redefine how the world works—how we work and play, find our soul mates and flings, tell our stories, and hail a ride. Tech is required to catalyze these shifts and disruptions. We all love good tech.

But the winners will be those who build the best businesses, not the best tech.


Securing Serverless

Guy Podjarny published a great blog post discussing the Serverless space from a security perspective. I highly recommend reading it as it touches on some great points, going over both the security benefits and possible risks.

Two points he made definitely stood out to me. The first was the concept of a greater attack surface. When I explain FaaS (Functions as a Service) to people, many immediately equate a function with a simple API endpoint. To a degree, they are correct. So what’s the difference, and why should we look at the security of each from a different perspective? I believe the differentiators are how the endpoint is exposed, and what its purpose is.

Standard API endpoints will often belong to a broader application or set of microservices that reside behind a shared layer of security: dedicated network hardware, hardened reverse proxies, and the like. As a security-minded developer, you develop your endpoint and consider the possible client-side attack vectors (Guy points to the OWASP (Open Web Application Security Project) Top Ten guide, which is a great place to start; Thomas Ptacek also has a great list here). Then you might move on to write another endpoint, which shares these concerns, all the while relying on that first layer of security.

When you start developing a suite of functions, things can start to get fragmented. Dependencies start to change between functions, software versions might differ, and the ways the functions are triggered may require different configurations on the network/gateway layer.

The second point that stood out was around monitoring. There are countless battle-tested monitoring solutions out there, but the way functions are deployed and used within an underlying architecture might leave them completely out of scope. Guy makes a great point that many of these products are agents that rely on long-running processes to attach to and collect data from. To monitor functions, different techniques need to be implemented for short-lived and hot processes.

All of these are great problems to have, and they point to rapid innovation in already fast-moving industries. You’ll see most vendors and platforms already tackling these issues and building solutions into their products. This space is still young! Here at Iron, we’re committed to making IronFunctions a respected open source solution for delivering FaaS wherever you want to deploy it.

IronFunctions Alpha 2

Today we are excited to announce the second alpha release of IronFunctions, the language-agnostic serverless microservices platform that you can run anywhere: on public, private, and hybrid clouds, and even on your own laptop.

The initial release of IronFunctions received some amazing feedback and we’ve spent the past few months fixing many of the issues reported. Aside from fixes, the new release comes with a whole host of great new features, including:

  • Long(er)-running containers for better performance, aka Hot Functions
  • LRU cache
  • Triggers example for the OpenStack project Picasso
  • Initial load balancer
  • fn: support route headers tweaks
  • fn: add Rust support
  • fn: add .NET Core support
  • fn: add Python support

Stay tuned for upcoming posts with insights into individual features such as the LRU cache, the load balancer, and the OpenStack integrations.

What’s next?

We will be releasing a Beta with more fixes, improvements to the load balancer, and a much-anticipated new feature that will allow chaining of functions.

We’re excited to hear people’s feedback and ideas, and it’s important that we’re building something that solves real world problems so please don’t hesitate to file an issue, or join us for a chat in our channel on our Slack Team.

Thanks for all the love and support,
The Team

Discuss on Hacker News
Join our Slack
File an Issue
Contact about enterprise support

Announcing Hot Functions for IronFunctions

IronFunctions is a serverless application platform. Unlike AWS Lambda, it’s open source, runs on any cloud (public, on-premises, or hybrid), and is language agnostic, all while maintaining AWS Lambda compatibility.

The initial release of IronFunctions received some amazing feedback, and the past few weeks were spent addressing outstanding issues. In this post I will highlight the biggest feature of the upcoming release: Hot Functions.


Hot Functions improve IronFunctions throughput by up to 8x (depending on task duration). By reusing containers (what we call Hot Functions), each call is reduced by 300ms.


Before Hot Functions, IronFunctions would spin up a new container to handle every job. This led to a 300ms overhead per job due to container startup time.
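The arithmetic behind the headline speedup is straightforward: if a task takes t ms of real work plus a fixed 300 ms cold start, removing the cold start yields a throughput gain of (t + 300) / t. A quick illustration; the task durations below are arbitrary examples, not measured values:

```python
STARTUP_MS = 300  # per-container startup overhead cited above

def speedup(task_ms: float) -> float:
    """Throughput gain from removing the cold start for a task of task_ms."""
    return (task_ms + STARTUP_MS) / task_ms

print(round(speedup(43), 1))  # roughly 8x for a ~43 ms task
print(round(speedup(2), 1))   # very short tasks benefit far more
```

This also explains why the trivial “Hello World” benchmark below shows a much larger gain than the 8x figure: the shorter the task, the larger the share of its wall time the 300 ms cold start used to consume.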

With Hot Functions, long-lived containers can serve many tasks of the same type without incurring the startup penalty. They do this by taking incoming workloads, feeding them in through standard input, and writing results to standard output. In addition, permanent network connections are reused. For more information on implementing Hot Functions, see the GitHub docs.
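The stdin/stdout contract can be sketched as follows. This is an illustrative Python loop that assumes newline-delimited payloads; it is not the exact framing IronFunctions uses (see the GitHub docs for that):

```python
import io

def handle(payload: str) -> str:
    # Stand-in for real work; the process stays alive between requests.
    return f"Hello {payload}!"

def serve(lines, out):
    """Read one payload per line, write one response per line."""
    for line in lines:
        out.write(handle(line.rstrip("\n")) + "\n")
        out.flush()  # respond right away; the next payload may arrive much later

# Inside a container this would be serve(sys.stdin, sys.stdout);
# simulated here with a buffer so the sketch is self-contained:
buf = io.StringIO()
serve(["World\n", "Iron\n"], buf)
print(buf.getvalue(), end="")
```

The key point is that the interpreter, imports, and any open connections persist across calls, so only the first request pays the startup cost.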

We ran our benchmark on a 1 GB DigitalOcean instance and plotted the results.

Simple function printing “Hello World” called for 10s (MAX CONCURRENCY = 1).

Hot Functions have 162x higher throughput.

Complex function pulling an image and computing an MD5 checksum, called for 10s (MAX CONCURRENCY = 1).

Hot Functions have 139x higher throughput.

By combining Hot Functions with concurrency we saw even better results: 

Complex function pulling an image and computing an MD5 checksum, called for 10s (MAX CONCURRENCY = 7).

Hot Functions have 7.84x higher throughput.

There’s more to this release as well: IronFunctions now uses the single-flight pattern for DB calls, and includes stability and optimization fixes across the board.

IronFunctions is maturing quickly and our community is growing. To get involved, please join our Slack community and check out IronFunctions today!

Also stay tuned for upcoming announcements by following this blog and our developer blog.

Hacker News conversation here.

Announcing Project Picasso – OpenStack Functions as a Service

We are pleased to announce a new project to enable Functions as a Service (FaaS) on OpenStack — Picasso.

The mission is to provide an API for running FaaS on OpenStack, abstracting away the infrastructure layer while enabling simplicity, efficiency, and scalability for both developers and operators.

Picasso can be used to trigger functions from OpenStack services, such as Telemetry (via HTTP callback) or Swift notifications. This means no long-running applications; functions are executed only when called.

Picasso is comprised of two main components:

  • Picasso API
    • The Picasso API server uses Keystone authentication and authorization through its middleware.
  • IronFunctions
    • Picasso leverages the backend container engine provided by IronFunctions, an open-source Serverless/FaaS platform based on Docker.



We’ve created some initial blueprints to show what the future roadmap looks like for the project.

You can try out Picasso now on DevStack by following the quick start guide here. Let us know what you think!

If you’re interested in contributing or just have any questions, please join us on the #OpenStack channel in Slack.