What is a Docker Image? (And how do you use one with IronWorker?)

What is a Docker image?

Love them or hate them, containers have become part of the infrastructure running just about everything. From Kubernetes to Docker, almost everyone has their own version of containers. The most commonly used is still Docker. IronWorker was among the very first to combine serverless management with containers. In this article, we will give a high-level overview of what Docker images are and how IronWorker uses them.

So, What is a Docker image?

To start, we need an understanding of Docker's nomenclature and environment. There is still no clear consensus on container terminology: what Docker calls one thing, Google calls another, and so on. We will focus only on Docker here. (For more on Docker vs. Kubernetes, read here.)

Docker has three main components that we should know about in relation to IronWorker:

  1. Dockerfile
  2. Docker image
  3. Docker container

1) Dockerfile

A Dockerfile is the set of instructions used to create a Docker image.

Let's keep it simple. Dockerfiles are configuration files that "tell" Docker what to install, update, and configure when building an image. In short, the Dockerfile describes exactly what to build to produce the Docker image.
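
As a minimal, illustrative sketch (the base image and file names are hypothetical, not from any particular project), a Dockerfile might look like this:

FROM alpine:3.9
# install the runtime the application needs
RUN apk add --no-cache python3
# copy the application code into the image
COPY app.py /app/app.py
# define what runs when a container starts from the image
CMD ["python3", "/app/app.py"]

Running docker build against this file produces an image containing Alpine Linux, Python, and the application, ready to be instantiated as containers.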

2) Docker Image

A Docker image is the result of carrying out the instructions in the Dockerfile. It is helpful to think of images as templates created from Dockerfiles. Images are arranged in layers automatically; each layer depends on the layer below it and is more abstracted than the one beneath.

By abstracting away the actual "instructions" (remember the Dockerfile?), an image provides an environment that can run with its resources isolated. Whereas virtual machines each carry a full guest operating system, containers share the host's kernel and package only what the application needs. In turn, this makes for a lightweight and highly scalable system. IronWorker takes these images and handles the creation and orchestration of the resulting containers. So what exactly is the difference between a Docker image and a Docker container? Let's see.
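
To make the image-and-layers idea concrete, here is a hedged sketch using standard Docker CLI commands (the image name is a placeholder):

# build an image from the Dockerfile in the current directory
docker build -t myapp:latest .
# list the layers that make up the image
docker history myapp:latest

Each line of docker history output corresponds to an instruction in the Dockerfile, showing how the layers stack on top of one another.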

3) Docker Containers

Finally, we come to the containers. To simplify, we can say that when a Docker image is instantiated, it becomes a container. By creating an instance that draws on system resources like memory, the container carries out whatever processes are packaged within it. While separate image layers may serve different purposes, a Docker container is typically formed to carry out a single, specific task. Think of a bee versus a beehive: individual workers carry out asynchronous tasks in service of a single goal. In short, containers are packages that hold all of the dependencies required to run an application.

After the container has run, the Docker image remains inert and inactive. The image has carried out its purpose and now serves only as a read-only reference from which further containers can be created.
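
An illustrative sketch (image name hypothetical) shows the relationship: one inert image can spawn any number of containers, and the image itself never changes:

# instantiate two separate containers from the same image
docker run --name job1 myapp:latest
docker run --name job2 myapp:latest
# the containers appear here, including stopped ones
docker ps -a
# the image is listed unchanged, ready for further runs
docker images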

IronWorker and Docker

So, you have your containers configured and everything is ready to go. What next? While Docker containers can function on their own, tasks like scaling workloads are much faster, more reliable, and easier with an orchestrator. IronWorker is one such container orchestrator, with some unique properties.

An orchestrator adds another layer of abstraction to implementing and running containers. In recent years this has become known as "serverless." While nothing is ever truly serverless, the term simply means there is no server management involved. By this point in the configuration, we have likely all but forgotten about our original Docker image.

Not having to manage a server that reacts to spikes in traffic or other processing needs greatly simplifies a developer's job. Tasks and other processes are scaled automatically, and detailed analytics are available at the same time. Because the containers are managed by IronWorker, whether they are short-lived or run for days, jobs are completed with minimal developer input after the initial setup.

What about migrating to other clouds or on-premise?

Traditionally, containers have been cloud based. As new options develop beyond just Amazon Web Services, the need for flexible deployment tools increases. DevOps changes frequently, sometimes even daily. One of the key benefits of IronWorker is how easily you can export your workloads (as Docker images) and keep running them, either redundantly or in new iterations, across varying environments. This includes deploying fully on-premise. This freedom from vendor lock-in, and the flexibility to meet future needs, is what separates IronWorker from the rest.

Start IronWorker now with a free 14-day trial here.

Google Cloud Run: Review and Alternatives

Introduction

Google Cloud Run is a new cloud computing platform that’s hot off the presses from Google, first announced at the company’s Google Cloud Next conference in April 2019. Google Cloud Run has generated a lot of excitement (and a lot of questions) among tech journalists and users of the public cloud alike, even though it’s still in beta.

We will discuss the ins and outs of Google Cloud Run in this all-in-one guide, including why it appeals to many Google Cloud Platform customers, what the features of Google Cloud Run are, and how it compares with the available alternatives.

What Is Google Cloud Run (And How Does It Work?)

What is serverless computing?

To answer the question “What is Google Cloud Run?,” we first need to define serverless computing.

Often just called “serverless,” serverless computing is a cloud computing paradigm that frees the user from the responsibility of purchasing or renting servers to run their applications on.

(Actually, the term “serverless” is a bit of a misnomer: The code still runs on a server, just not one that the user has to worry about.)

Cloud computing has soared in popularity over the past decade, thanks in large part to its increased convenience and lower maintenance requirements. Traditionally, however, users of cloud services have still needed to set up a server, scale its resources when necessary, and shut it down when they're done. This has all changed with the arrival of serverless.

The phrase “serverless computing” is applied to two different types of cloud computing models:

  • BaaS (backend as a service) outsources the application backend to the cloud provider. The backend is the “behind the scenes” part of the software for purposes such as database management, user authentication, cloud storage, and push notifications for mobile apps.
  • FaaS (function as a service) still requires developers to write code for the backend. The difference is this code is only executed in response to certain events or requests. This enables you to decompose a monolithic server into a set of independent functionalities, making availability and scalability much easier.

You can think of FaaS serverless computing as a water faucet in your home. When you want to take a bath or wash the dishes, you simply turn the handle to make the water flow. The supply is virtually infinite, you stop when you have as much as you need, and you pay only for what you've used.

Cloud computing without FaaS, by contrast, is like having a water well in your backyard. You need to take the time to dig the well and build the structure, and you only have a finite amount of water at your disposal. In the event that you run out, you’ll need to dig a deeper well (just like you need to scale the server that your application runs on).

Regardless of whether you use BaaS or FaaS, serverless offerings allow you to write code without having to worry about how to manage or scale the underlying infrastructure. For this reason, serverless has come into vogue recently. In a 2018 study, 46 percent of IT decision-makers reported that they were using or evaluating serverless technology.

What are containers?


Now that we’ve defined serverless computing, we also need to define the concept of a container. (Feel free to skip to the next section if you’re very comfortable with your knowledge of containers.)

In the world of computing, a container is an application “package” that bundles up the software’s source code together with its settings and dependencies (libraries, frameworks, etc.). The “recipe” for building a container is known as the image. An image is a static file that is used to produce a container and execute the code within it.

One of the primary purposes of containers is to provide a familiar IT environment for the application to run in when the software is moved to a different system or virtual machine (VM).

Containers are part of a broader concept known as virtualization, which seeks to create a virtual resource (e.g., a server or desktop computer) that is completely separate from the underlying hardware.

Unlike virtual machines, containers do not include the underlying operating system. This makes them more lightweight, portable, and easy to use.

When you say the word "container," most enterprise IT staff will immediately think of Docker, Kubernetes, or both. These are the two most popular container solutions.

  • Docker is a runtime environment that seeks to automate the deployment of containers.
  • Kubernetes is a “container orchestration system” for Docker and other container tools, which means that it manages concerns such as deployment, scaling, and networking for applications running in containers.

Like serverless, containers have dramatically risen in popularity among users of cloud computing in just the past few years. A 2018 survey found that 47 percent of IT leaders were planning to deploy containers in a production environment, while 12 percent already had. Containers enjoy numerous benefits: platform independence, speed of deployment, resource efficiency, and more.

Containers vs. serverless: A false dilemma

Given the massive success stories of containers and serverless computing, it’s hardly a surprise that Google would look to combine them. The two technologies were often seen as competing alternatives before the arrival of Google Cloud Run.

Both serverless and containers are intended to make the development process less complex. They do this by automating much of the busy work and overhead. But they go about it in different ways. Serverless computing makes it easier to iterate and release new application versions, while containers ensure that the application will run in a single standardized IT environment.

Yet nothing prevents cloud computing users from combining both of these concepts within a single application. For example, an application could use a hybrid architecture in which containers pick up the slack if a certain function requires more memory than the serverless vendor has provisioned for it.

As another example, you could build a large, complex application that mainly has a container-based architecture, but that hands over responsibility for some backend tasks (like data transfers and backups) to serverless functions.

Rather than continuing to enforce this false dichotomy, Google realized that serverless and containers could complement one another, each compensating for the other one’s deficiencies. There’s no need for users to choose between the portability of containers and the scalability of serverless computing.

Enter Google Cloud Run…

What is Google Cloud Run?

In its own words, Google Cloud Run “brings serverless to containers.” Google Cloud Run is a fully managed platform that is capable of running Docker container images as a stateless HTTP service.

Each container can be invoked with an HTTP request. All the tasks of infrastructure management (provisioning, scaling up and down, configuration, and management) are cleared away from the user, as typically occurs with serverless computing.

Google Cloud Run is built on the Knative platform, which is an open API and runtime environment for building, deploying, and managing serverless workloads. Knative is based on Kubernetes, extending the platform in order to facilitate its use with serverless computing.
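
As a hedged sketch of the beta-era workflow (the project, service, and region names are placeholders, and beta commands may change), deploying a container image to Google Cloud Run looks roughly like this:

# build the container image and push it to Google Container Registry
gcloud builds submit --tag gcr.io/my-project/my-service
# deploy the image as a Cloud Run service
gcloud beta run deploy my-service --image gcr.io/my-project/my-service --region us-central1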

In the next section, we’ll have more technical details about the features and requirements of Google Cloud Run.

Google Cloud Run Features and Requirements

Features

Google cites the selling points below as the most appealing features of Google Cloud Run:

  • Easy autoscaling: Depending on light or heavy traffic, Google Cloud Run can automatically scale your application up or down.
  • Fully managed: As a serverless offering, Google Cloud Run handles all the annoying and frustrating parts of managing your IT infrastructure.
  • Completely flexible: Whether you prefer to code in Python, PHP, Pascal, or Perl, Google Cloud Run is capable of working with any programming language and libraries (thanks to its use of containers).
  • Simple pricing: You pay only when your functions are running. The clock starts when the function is spun up, and ends immediately once it’s finished executing.

There are actually two options when using Google Cloud Run: a fully managed environment or a Google Kubernetes Engine (GKE) cluster. You can switch between the two choices easily, without having to reimplement your service.

In most cases, it’s best to stick with Google Cloud Run itself, and then move to Cloud Run on GKE if you need certain GKE-specific features, such as custom networking or GPUs. However, note that when you’re using Cloud Run on GKE, the autoscaling is limited by the capacity of your GKE cluster.

Google Cloud Run requirements

Google Cloud Run is still in beta (at the time of this writing). This means that things may change between now and the final version of the product. However, Google has already released a container runtime contract describing the behavior that your application must adhere to in order to use Google Cloud Run.

Some of the most noteworthy application requirements for Google Cloud Run are:

  • The container must be compiled for Linux 64-bit, but it can use any programming language or base image of your choice.
  • The container must listen for HTTP requests on the IP address 0.0.0.0, on the port defined by the PORT environment variable (almost always 8080); see the example server after this list.
  • The container instance must start an HTTP server within 4 minutes of receiving the HTTP request.
  • The container’s file system is an in-memory, writable file system. Any data written to the file system will not persist after the container has stopped.
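
As a minimal sketch of a container process that satisfies this contract, written here in Go (any language would do, and the greeting is illustrative):

package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	// Cloud Run supplies the port via the PORT environment variable.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080" // fallback for local testing
	}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Hello from Cloud Run!")
	})

	// Listening on ":"+port binds to 0.0.0.0, as the contract requires.
	log.Fatal(http.ListenAndServe(":"+port, nil))
}

Note that the server writes nothing to disk and keeps no state between requests, in line with the statelessness requirement discussed below.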

With Google Cloud Run, the container only has access to CPU resources if it is processing a request. Outside of the scope of a request, the container will not have any CPU available.

In addition, the container must be stateless. This means that the container cannot rely on the state of a service between different HTTP requests, because it may be started and stopped at any time.

The resources allocated for each container instance in Google Cloud Run are as follows:

  • CPU: 1 vCPU (virtual CPU) for each container instance. However, the instance may run on multiple cores at the same time.
  • Memory: By default, each container instance has 256 MB of memory. Google says this can be increased up to a maximum of 2 GB (a sketch of how follows below).
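
As a hedged sketch (the flag below matches Google's documentation at the time of writing and may change), the memory limit can be raised at deploy time:

gcloud beta run deploy my-service --image gcr.io/my-project/my-service --memory 512Mi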

Cloud Run Pricing


Google Cloud Run uses a "freemium" pricing model: free monthly quotas are available, but you'll need to pay once you go over the limit. These types of plans frequently catch users off guard, and they end up paying much more than expected. According to Forrester, a staggering 58% of companies surveyed said their costs exceeded their estimates.

The good news for Google Cloud Run users is that you’re charged only for the resources you use (rounded up to the nearest 0.1 second). This is typical of many public cloud offerings.

The free monthly quotas for Google Cloud Run are as follows:

  • CPU: The first 180,000 vCPU-seconds
  • Memory: The first 360,000 GB-seconds
  • Requests: The first 2 million requests
  • Networking: The first 1 GB egress traffic (platform-wide)

Once you bypass these limits, however, you’ll need to pay for your usage. The costs for the paid tier of Google Cloud Run are:

  • CPU: $0.000024 per vCPU-second
  • Memory: $0.0000025 per GB-second
  • Requests: $0.40 per 1 million requests
  • Networking: Free during the Google Cloud Run beta, with Google Compute Engine networking prices taking effect once the beta is over.

It’s worthwhile to note you are billed separately for each resource; for example, the fact that you’ve exceeded your memory quota does not mean that you need to pay for your CPU and networking usage as well.
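
To illustrate with hypothetical numbers: a service that uses 500,000 vCPU-seconds, 500,000 GB-seconds of memory, and 3 million requests in a month would owe (500,000 - 180,000) x $0.000024 = $7.68 for CPU, (500,000 - 360,000) x $0.0000025 = $0.35 for memory, and 1 million x $0.40 = $0.40 for requests, or roughly $8.43 in total (networking is free during the beta).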

In addition, these prices may not be definitive. Like the features of Google Cloud Run, prices for Google Cloud are subject to change once the platform leaves beta status.

Finally, Cloud Run on GKE uses a separate pricing model that will be announced before the service reaches general availability.

Google Cloud Run Review: Pros and Cons

Because it's a brand new product that's still in beta, reputable Google Cloud Run reviews are still hard to find.

Reaction to Google’s announcement has been fairly positive, acknowledging the benefits of combining serverless computing with a container-based architecture. Some users believe that the reasonable prices will be enough for them to consider switching from similar services such as AWS Fargate.

Other users are more critical, however, especially given that Google Cloud Run is currently only in beta. Some are worried about making the switch, given Google’s track record of terminating services such as Google Reader, as well as their decision to alter prices for the Google Maps API, which effectively shut down many websites that could not afford the higher fees.

Given that Google Cloud Run is in beta, the jury is still out on how well it will perform in practice. Google does not provide any uptime guarantees for cloud offerings before they reach general availability.

The disadvantages of Google Cloud Run will likely overlap with the disadvantages of Google Cloud Platform as a whole. These include the lack of regions when compared with competitors such as Amazon and Microsoft. In addition, as a later entrant to the public cloud market, Google can sometimes feel “rough around the edges,” and new features and improvements can take their time to be released.

Google Cloud Run Alternatives

Since this is a comprehensive review of Google Cloud Run, we would be remiss if we didn’t mention some of the available alternatives to the Google Cloud Run service.

In fact, Google Cloud Run shares some of its core infrastructure with two of Google’s other serverless offerings: Google Cloud Functions and Google App Engine.

  • Google Cloud Functions is an “event-driven, serverless compute platform” that uses the FaaS model. Functions are triggered to execute by a specified external event from your cloud infrastructure and services. As with other serverless computing solutions, Google Cloud Functions removes the need to provision servers or scale resources up and down.
  • Google App Engine enables developers to "build highly scalable applications on a fully managed serverless platform." The service provides access to Google's hosting and tier 1 internet service. However, one limitation of Google App Engine is that code must be written in one of its supported languages (originally Java or Python) and use Google's NoSQL datastore, which is built on Bigtable.

Looking beyond the Google ecosystem, there are other strong options for developers who want to leverage both serverless and containers in their applications.

The most tested Cloud Run alternative: Iron.io

Iron.io is a serverless platform that offers a multi-cloud, Docker-based job processing service. As one of the early adopters of containers, we have been a major proponent of the benefits of both technologies.

The centerpiece of Iron.io's product line, IronWorker, is a scalable task queue platform for running containers at scale. IronWorker has a variety of deployment options, from using shared infrastructure to running the platform in your in-house IT environment. Jobs can be scheduled to run at a certain date or time, or processed on demand in response to certain events.

In addition to IronWorker, we also provide IronFunctions, an open-source serverless microservices platform that uses the FaaS model. IronFunctions is a cloud-agnostic offering that can work with any public, private, or hybrid cloud environment, unlike services such as AWS Lambda. Indeed, Iron.io allows AWS Lambda users to easily export their functions into IronFunctions, which helps avoid vendor lock-in. IronFunctions uses Docker containers as its basic unit of work, which means you can work with any programming language or library that fits your needs.

Conclusion

Google Cloud Run represents a major development for many customers of Google Cloud Platform who want to use both serverless and container technologies in their applications. However, Google Cloud Run is only the latest entrant into this space, and may not necessarily be the best choice for your company’s needs and objectives.

If you want to determine which serverless + container solution is right for you, speak with a skilled, knowledgeable technology partner like Iron.io who can understand your individual situation. Whether it’s our own IronWorker solution, Google Cloud Run, or something else entirely, we’ll help you get started on the right path for your business.

Introducing: Computerless™

Iron was one of the pioneers of Serverless, so we’re excited to announce that we’ll also be one of the first companies to offer the next generation of compute:  It’s called Computerless™.

Unlike Serverless, this technology removes the physical machine completely.  Our offering piggy-backs off the recent developments in fiber optic technology developed at the University of Oxford.  If you haven’t heard about this breakthrough, we’ll do our best to explain:

Researchers have found a way to control how light travels at the molecular level, thus being in complete control of the resulting attenuation.  Molecular gates can then be created, and state stored in finite wavelengths. It’s somewhat equivalent to qubits in quantum computing, but in the case of optical fiber, it’s a physical reality.

The end result of this technological release allows for computers to be fully encapsulated in fiber optic cable.  The usual components needed are now mapped 1-to-1, via light. This has allowed Iron’s infrastructure to completely change.  While we’ve run our infrastructure on public clouds like AWS and GCP in the past, we’ve been able to leave that all behind. We’re now able to push our entire suite of products into optical cable itself:


Iron’s new and improved infrastructure on a cheap plot of land in Arkansas

In the next few months, we'll be pushing all of our customers' sensitive data into the cables shown above, as well as running all Worker jobs through them. We're pretty sure the cables we purchased are for multi-tenant applications, so you can probably rest assured that we're doing the right thing. In fact, NASA has already expressed an interest in licensing this technology from Iron. Other interested parties include the government of French Guiana and defense conglomerate Stark Industries.

Researchers have kind-of concluded that this technology is ready for prime time, and also are quick to state the fact that in 1998, The Undertaker threw Mankind off Hell In A Cell, and plummeted 16 ft through an announcer’s table.

Docker, Inc isn’t Dead

Chris Short recently wrote up a piece entitled Docker, Inc is Dead, predicting that the company would no longer exist sometime in 2018. It's well written, and he does a good job of running through some of Docker's history in recent years. Although I agree with some of his sentiments, I don't think Docker, Inc will exit the stage anytime soon. Here are some reasons I think Docker, Inc will live a healthy life in 2018.

Docker is Good Software

This was the first point in Chris' piece, and he's right. Docker definitely helped widen the spotlight on *n?x kernels. Discussions around namespaces, cgroups, lxc, zones, jails, etc. lit up across communities in different disciplines. Docker's simple interface lowered the barrier of entry for non-administrators, and the developer community immediately added it to their workflows. Docker released EE/UCP, and larger organizations jumped on board. It "is" good software for developers, SMBs, and large organizations, and Docker, Inc isn't slowing down development efforts.

Docker Has Friends

"I'm really excited to welcome Solomon and Docker to the Kubernetes community." Brendan Burns (of Microsoft, lead engineer of Kubernetes) definitely made me do a double take when he said that on stage at DockerCon EU a few months ago. Many people I spoke to at the conference referenced that statement and saw it as a big blow to Docker. "Who's joining whose community?" The thing is, the real purpose of Brendan's talk was the collaboration between companies, and the effort to make our lives as developers and administrators better. The whole "it takes a village to raise a child" saying. This village is composed of some of the brightest engineers from many of the world's largest companies, and they're all striving to make things better. Docker and Kubernetes worked together, and the Kubernetes integration into UCP made perfect sense.

Docker has business

They don't lack coherent leadership. They've received a ton of money, their marketing is great, and they're acting like what they are: a rapidly growing company moving into the enterprise market. Were some of their keynotes awkward at DockerCon EU this year? Yes. Were there fantastic sessions from customers who shared real-life Docker success stories? Yes. Have they made some mistakes here and there? Yes. Have they moved past those and grown? Yes. If you've been around the block and watched small companies rapidly grow into behemoths, this is all typical. Growing isn't easy. Their "Modernizing Enterprise Applications" mantra is perfect. There are countless technical budgets from Fortune 10,000 companies that Docker, Inc will capitalize on. The best part is that they'll actually be making a positive difference. They are not snake-oil salesmen. These companies will probably see real ROI in their engagements.

Conclusion

Docker, Inc isn't going to be acquired (yet) or close their doors. There is a lot going on at Docker, Inc right now, but these aren't the signs of a company that is getting ready for a sale.

It’s a company that’s based on OSS with a lot of opportunity in the market.  While one of the products at Iron is Docker-based, we use a wide variety of software from many companies with roots in OSS.  We’re happy to pay for a higher level of support and features for OSS software backed by a business.  For other projects, we often donate through Open Collective to help maintainers and small development teams.  Docker’s donation of containerd was a great move and I think it is a project that fits perfectly into CNCF’s charter.

While Docker, Inc is moving upstream, they haven't at all abandoned their real users: developers. We use Docker daily, contribute back when we can, and are optimistic about its trajectory as a business and a product. Docker, Inc has a lot of room to grow, and in 2018, it will.



The Overhead of Docker Run

First published on Medium on 10/11/2016.

We use Docker a lot. Like a lot, lot. While we love it for a lot of things, it still has a lot of room for improvement. One of those areas that could use improvement is the startup/teardown time of running a container.

The Test

To test the overhead of running a Docker container, I made a script that compares execution times for various docker run options vs not using Docker at all. The script that I’m running is a simple hello world shell script that consists of the following:

echo "Hello World!"

The base Docker image is the official Alpine Linux image plus the script above.
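
The image presumably resembles the following sketch (reconstructed from the description above rather than copied from the repository):

FROM alpine
# add the one-line script and make it the container's entrypoint
COPY hello.sh /hello.sh
ENTRYPOINT ["/hello.sh"]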

4 Things to Compare

  1. As a baseline, the first measurement is sans Docker. This is just running the hello.sh script directly.
  2. The second measure is just docker run IMAGE.
  3. The third measure adds the --rm flag to remove the container after execution.
  4. The final one uses docker start instead of run, so we can see the effect of reusing an already created container. (A quick way to reproduce these measurements by hand follows this list.)
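
If you want a rough feel for these numbers without the full harness, here is a hedged sketch using the shell's built-in time (not the author's exact measurement script, which averages over many runs):

time ./hello.sh
time docker run treeder/hello:sh
time docker run --rm treeder/hello:sh
# create a container once, then time reusing it
docker create --name reuse treeder/hello:sh
time docker start -a reuse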

Docker for Mac

Server Version: 1.12.2-rc1

Running: ./hello.sh
avg: 5.897752ms
Running: docker run treeder/hello:sh
avg: 988.098391ms
Running: docker run --rm treeder/hello:sh
avg: 999.637832ms
Running: docker start -a reuse
avg: 986.875089ms

(Note: looks like using Ubuntu as a base image is slightly faster than Alpine, in the 10–50ms range).

Docker on Ubuntu

Server Version: 1.12.1

Running: ./hello.sh
avg: 2.139666ms
Running: docker run treeder/hello:sh
avg: 391.171656ms
Running: docker run --rm treeder/hello:sh
avg: 396.385453ms
Running: docker start -a reuse
each: 340.793602ms

Results

As you can see from the results above, using Docker adds nearly a full second to the execution time of our script on Mac and ~390ms on Linux (~175x slower than running the script without Docker).

Now this may not be much of an issue if your script/application runs for a long period of time, but it is certainly an issue if you run short lived programs.

Try it yourself

Feel free to try running the script on your system and share the results! You can find everything you need here: https://github.com/treeder/dockers/tree/master/hello

Just clone that repo, cd into the hello directory and run:

go run time.go


Ready to get started with IronWorker?

IronWorker offers a free 14-day trial. Sign up here.

Buzzwords: Microservices, Containers and Serverless at Goto Chicago

Dave speaking at Goto Chicago

It was an honor to give a talk on the future of Serverless at goto Chicago, an enterprise developer conference running from May 24 to 25, 2016. As you can see from the full room, containers, microservices, and serverless are popular topics with developers, and this interest extends across a wide swath of back-end languages, from Java to Ruby to Node.js. Unfortunately, the talk was not recorded, so I'm providing these notes (and my slide deck) for those who could not attend.

The Evolution of Deployed Applications

Before we look forward into the future of Serverless, let’s look back. We’ve seen a historical evolution in deployed applications at multiple different levels. Whereas before the unit of scale was measured by how many servers you could deploy, we’ve moved through rolling out virtual machines to the current pattern of scaling our containerized infrastructure. Similarly, we’ve seen a shift from monolithic architectures deployed through major releases to containerized, continuously-updated microservices. This paradigm is Iron.io’s “sweet spot,” and we’re leading the enterprise towards a serverless computing world.


Gartner Names Iron.io on 2016 “Cool Vendor” List


Here’s some cool news. Iron.io was recently named a “Cool Vendor” in the Cool Vendors in Platform as a Service, 2016[1] report by Gartner. The report puts Iron.io on an extremely short list with just three other vendors in the space: Clusterpoint in London, England; Flybits in Toronto, Canada; and Neoway out of Florianopolis, Brazil.

The Cool Vendors research by Gartner is designed to help CIOs and other top IT leaders stay ahead of the IT technology curve. It also helps them make better strategic decisions about technology and services. "The vendors in this report offer new platform opportunities for business and IT, in response to increasing demand for intelligent business operations with cloud levels of scale, agility and responsiveness," the report states.

Microcontainers, and Logging in Docker: Iron.io CTO speaks at Docker NYC


Travis Reeder, the co-founder and CTO of Iron.io, spoke at last night’s Docker NYC meetup about Microcontainers. In addition, Hermann Hesse of Sumo Logic spoke about Logging in Docker.


Iron.io is a big proponent of microcontainers: minimalistic Docker containers that can still process full-fledged jobs. We've seen microcontainers gaining traction amongst software architects and developers because their minimal size makes them easy to download and distribute via a Docker registry. Microcontainers are also easier to secure: the small amount of code, libraries, and dependencies reduces the attack surface and makes the OS base more secure.

GoSF: The 1.6 Release Party at Docker HQ

Go 1.6 Launch Party

Lightning, thunder, and even hail swept through SF yesterday. But that didn't deter hundreds from hustling to Docker's HQ for the Go 1.6 release party! GoSF received over 470 signups, a nice sum for a relatively young language.

Yesterday’s launch party boasted trivia, stuffed gopher giveaways, and a limited run T-shirt from Iron.io’s Bruce Lu. Oh, and as always there were some great talks.

Video of the talks will also be online soon! For the impatient, we've also included summaries and slides of last night's talks below.

Microcontainers – Tiny, Portable Docker Containers

Docker enables you to package up your application along with all of the application's dependencies into a nice self-contained image. You can then use that image to run your application in containers. The problem is that you usually package up a lot more than you need, so you end up with a huge image and therefore huge containers. Most people who start using Docker will use Docker's official repositories for their language of choice, but unfortunately, if you use them, you'll end up with images the size of the Empire State Building when you could be building images the size of a birdhouse. You simply don't need all of the cruft that comes along with those images. If you build a Node image for your application using the official Node image, it will be a minimum of 643 MB, because that's the size of the official Node image.

I created a simple Hello World Node app and built it on top of the official Node image and it weighs in at 644MB.

That's huge! My app is less than 1 MB with dependencies, and the Node.js runtime is ~20 MB, so what's taking up the other ~620 MB?? We must be able to do better.

What is a Microcontainer?

A Microcontainer contains only the OS libraries and language dependencies required to run an application and the application itself. Nothing more.

Rather than starting with everything but the kitchen sink, start with the bare minimum and add dependencies on an as-needed basis.

Taking the exact same Node app above, using a really small base image and installing just the essentials, namely Node.js and its dependencies, it comes out to 29MB. A full 22 times smaller!
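
As a sketch of the approach (the base image tag and package name are illustrative; the original setup used its own small base images):

FROM alpine:3.3
# install only the Node.js runtime, nothing else
RUN apk add --no-cache nodejs
# add the application itself
COPY app.js /app.js
ENTRYPOINT ["node", "/app.js"]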

Regular Image vs MicroImage

Try running both of those yourself right now if you'd like: docker run --rm -p 8080:8080 treeder/tiny-node:fat, then docker run --rm -p 8080:8080 treeder/tiny-node:latest. Exact same app, vastly different sizes.
