2015 Container Summit notes and learnings, Part 1

Container Summit 2015

A day ago I joined 700+ folks at the Palace Hotel in San Francisco to attend the 2015 Container Summit. Containers are young, but one thing this event made clear is that their forebears have been around for quite a while.

A favorite part of the summit was hearing war stories: how containers are called on to get things done in the real world. There were plenty of looks to the past and the future as well.

I learned quite a bit! What follows is part one of my notes and learnings from Container Summit.

Going Container Native

Bryan Cantrill, Joyent

Joyent is a name most people know from its association with Node.js. Joyent's CTO, Bryan Cantrill, came to the stage to discuss Joyent's other ventures: SmartOS, Triton, and a bit of the history behind containers and virtualization.

At the core of Bryan's talk was the thought that containers are actually quite an old idea. It could be conjectured that their origins lie in chroot from 7th Edition Unix. You could think of chroot as the container world's archaeopteryx: it's debatable whether it fully qualifies as a container, but in chroot we begin to see the first hints of the future.
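
For a bit of grounding, here's a minimal sketch of what chroot actually provides (and what it doesn't), written against Python's os.chroot. The /srv/jail path is just a placeholder, and the call requires root.

```python
import os

# A minimal sketch of the chroot idea: confine a process to a subtree of the
# filesystem. Requires root, and assumes the placeholder /srv/jail already
# contains whatever binaries and libraries the process will need.
os.chroot("/srv/jail")
os.chdir("/")  # ensure the working directory is inside the new root

# From here on, "/" means /srv/jail; the rest of the filesystem is invisible.
# Note what chroot does NOT give you: no separate process table, no network
# isolation, no resource limits. That's why it reads as an ancestor of
# containers rather than a container itself.
print(os.listdir("/"))
```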

Bryan went on to explore FreeBSD's foray into Jails and Sun Microsystems' attempt with Zones. These are much stronger candidates for the first "real" entries into the container space.

For those curious, Bryan suggests grabbing a copy of the paper "Virtualization and namespace isolation in Solaris (2002)." The paper outlines five guiding principles, in priority order: security, isolation, virtualization, granularity (the assignment of small amounts of resources), and transparency (the API of the OS should behave as close to the same inside a Zone as outside).

Bryan then turned his attention to hardware virtualization; in other words, virtual machines. Unsurprisingly, they also have old roots. Bryan hopped in the wayback machine to visit IBM's development of the System/360 (S/360), a move meant to consolidate and improve the then-available CPU instruction sets.

A rival, Honeywell, developed a machine dubbed "The Liberator." The Honeywell machine was able to virtualize the old instruction sets significantly better than the 360. This is probably the first shot fired in the hardware virtualization war.

Fast forward to VMware pioneering hardware virtualization on the x86 architecture. We're still seeing a lot of the same drawbacks today as we saw way back with the Liberator (love that name). Cantrill notes that the poor resource utilization of VMs is a killer, and by the very nature of VMs it always will be.

Bryan revisited this point frequently. It’s quite odd that a lot of containers are stuck running on VMs (an inherently ugly + slow solution).

Docker is easily the most recognizable name in the container space. But where did it come from? What was Docker before Docker? Mr. Cantrill elucidated: it was born from an internal project at the PaaS dotCloud.

He went on to ask some incisive questions. Containers aren't new, so why do people care all of a sudden? Docker's differentiator is its focus on making app development easier, not just deployment. Although easier deployment is certainly a nice side effect.

Bryan then put forth a prediction, “Docker will do to apt what apt did to tar.” Docker is much easier to reason about than traditional app dependencies.

He punctuated this with two portraits: one of George Stephenson and the other of Keith Tantlinger. George Stephenson standardized the width of railroad tracks. Keith Tantlinger invented the twist-lock for stacking and securing shipping containers. Both inventions vastly improved the efficiency of trade and transportation.

Docker’s interface does something very similar. The creation and management of containers is vastly simplified. So, again, why Docker? It’s actually a mix of standardization and ease of use.
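
To make the ease-of-use point concrete, here's a rough sketch using the Docker SDK for Python. The image name and command are placeholders, and it assumes a Docker daemon is reachable from wherever you run it.

```python
import docker  # Docker SDK for Python (pip install docker)

# Connect to whatever daemon the environment points at (DOCKER_HOST, or the
# local daemon by default).
client = docker.from_env()

# A single call pulls the image if needed, creates a container, runs the
# command, captures its output, and removes the container afterwards.
output = client.containers.run(
    "alpine:latest", "echo hello from a container", remove=True
)
print(output.decode().strip())
```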

Bryan sees this standardization and usability as the gateway to many more container-type projects. The doors are wide open, and it's unlikely it'll be a winner-takes-all future.

“We’re going to see mutations, in fact we’ve already seen mutations.”

Building on all of this history, Bryan showed a quick glimpse of what he and his compatriots at Joyent have been hard at work on. Triton is a way to virtualize the Docker host. In his own words, “Triton lets you run secure Linux containers directly on bare metal via an elastic Docker host that offers tightly integrated software-defined networking.” No VMs!
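
As a hedged illustration of what "virtualizing the Docker host" means for a user: the ordinary Docker tooling simply gets pointed at a remote endpoint instead of a local daemon. The endpoint URL and certificate paths below are placeholders, not Joyent's actual values.

```python
import docker

# Point a standard Docker client at a remote, TLS-protected endpoint. With
# something like Triton, that endpoint fronts an entire datacenter rather
# than a single machine; provisioning onto bare metal happens behind the
# same Docker API.
client = docker.DockerClient(
    base_url="tcp://docker.example-endpoint.example.com:2376",  # placeholder
    tls=docker.tls.TLSConfig(
        client_cert=("/path/to/cert.pem", "/path/to/key.pem"),  # placeholders
        ca_cert="/path/to/ca.pem",
    ),
)

# To the client this looks like one (very large) Docker host.
print(client.info().get("Name"))
```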

If you’re interested in poking at Triton, head over to Joyent’s site for a tour.

Bryan closed with a thought: the container revolution should change how we think about computing.

Production-Ready Containers

David Lester, Twitter

David Lester from Twitter hopped on stage next. Dave's talk focused on Twitter's move from an enormous Ruby on Rails codebase to tech like Mesos and Aurora. To most it's still new tech, but Twitter has rigorously battle-tested it over four years.

For the uninitiated, what is Mesos? Mesos is a project born from research at UC Berkeley. The goal is to improve resource utilization and enable software to share resources in a finer-grained, elastic manner.

In other words, in traditional architectures the quantum of allocation is the server. Mesos allows finer slicing and dicing of resources. Underlying resources like CPU and RAM are abstracted, and classic issues like utilizing heterogeneous resources vanish.
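
To make that concrete, here's a rough sketch of how a framework sees the datacenter through Mesos's original Python bindings. The class name and logic are illustrative only: it prints the resource slices it is offered and declines them all.

```python
from mesos.interface import Scheduler  # classic Mesos Python bindings


class SliceWatcher(Scheduler):
    """Illustrative scheduler: look at resource offers, take nothing."""

    def resourceOffers(self, driver, offers):
        for offer in offers:
            # Each offer is a slice carved off some machine (e.g. 2 CPUs and
            # 4 GB of RAM), not the whole box.
            slices = {
                r.name: r.scalar.value
                for r in offer.resources
                if r.HasField("scalar")
            }
            print("offer from", offer.hostname, slices)
            # A real framework would launch tasks against offers it wants;
            # this sketch declines everything.
            driver.declineOffer(offer.id)
```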

That said, switching any architecture is expensive. What spurred Twitter’s move away from Rails?

Spikes
During an airing of Castle in the Sky, a particular word was said in the Japanese version. As the Japanese audience simultaneously tweeted about it, they managed to set a record for Tweets per second. Lester then asked: how does your infrastructure handle gigantic spikes? The response from the crowd was more than a bit of uncomfortable shifting in seats.

Lester noted that Castle in the Sky is a good example of an unpredictable spike.

In 2010, Twitter saw a predictable type of spike. The FIFA World Cup saw more than a few “fail whales” (Twitter’s endearing term for their error page). As goals were scored, people would tweet, and as a result servers would fall down.

It was around this time period that Twitter decided something needed to change.

Solutions
Solution 1: throw more machines at the problem (doesn't work for Castle in the Sky situations).
Solution 2: improve the scalability of the system.

Dave notes that there were a few core needs as well. Twitter comprises hundreds of engineers. They would need a solution that could scale infrastructure-wise, but also across all the engineers working on it.

Isolation of failures and feature development is a big help in this arena. Additionally, microservices allowed small teams to gain ownership over specific pieces of the stack.

Twitter's stack: Mesos, Aurora, and Finagle
Dave says this led them to their eventual design: Mesos as "the nervous system for the datacenter." It monitors, but can't make decisions. It's a nice abstraction between nodes and apps.

For their decision maker, Dave says they chose Aurora. It’s a scheduler, which in the nervous-system metaphor would be the brain.

Lastly, Dave mentioned that Finagle is a way to help the disparate pieces communicate nicely.
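
For a sense of the developer-facing side of this stack, an Aurora job is described in a small Python-based DSL that the aurora client evaluates (Process, Task, Resources, MB, and Service are provided by that environment, not imported). The cluster, role, and resource figures below are placeholders; a job sketch might look roughly like this:

```python
# hello_world.aurora -- a sketch of Aurora's Python-based job DSL. The names
# Process, Task, Resources, Service, and MB are injected by the aurora client
# when it evaluates this file; cluster, role, and numbers are placeholders.
hello = Process(
    name="hello",
    cmdline="while true; do echo hello; sleep 10; done",
)

hello_task = Task(
    name="hello",
    processes=[hello],
    # Ask for a slice of a machine, not a whole machine.
    resources=Resources(cpu=0.5, ram=128 * MB, disk=64 * MB),
)

jobs = [
    Service(
        task=hello_task,
        cluster="example-cluster",
        role="www-data",
        environment="devel",
        name="hello",
    )
]
```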

Despite the love Mr. Lester showed the above projects, he also shared a word of caution:
"All the projects you'll be hearing about today want to rule the world. Nobody wants to be a second-class citizen."

In other words, none of these platforms are neutral, and neither are the platforms the other speakers will share. All have biases about how developers should interact with them: Mesos, Kubernetes, OpenStack, Docker, and so on.

Dave believes many of these ecosystems will need to compromise and work together going forward. Specific to Twitter’s past and future search for tools, they look at a few indicators.

First, what are the right abstractions for software to integrate with?
Second, what are the best interfaces for developers?

Future abstractions and interfaces
Twitter’s current architecture is a scheduler-resource-allocation model.

Dave doesn't believe that's the final answer to infrastructure, nor is it a one-size-fits-all solution. Mesos abstracts the whole datacenter, making it appear as one giant computer. That's beautiful.

But we can look to other projects like Kubernetes, Docker, etc. These are also powerful + highly extensible tools that may lead to new abstractions.

This led Dave to ask some open questions about the hurdles future tools need to jump:
  • What are the common interfaces for tomorrow?
  • Is it a CLI or a UI? What fits best in each world?
  • Will they vary locally vs. in the cloud? A lot of tools for local machines don't do too hot in the cloud, and vice versa.
  • Maybe containers like Docker will evolve and take on more Mesos-like features. Swarm? It's the first hint.

Dave closed by reiterating the thought, "Everybody wants to rule the world." He also believes the way we define these interfaces and abstractions will determine how quickly we arrive at a better, more collaborative future.

End of Part 1

Thus end my notes from the first bit of Container Summit. We’ll have more headed your way tomorrow, in Part 2.
