2015 Container Summit notes and learnings, part 2

Containers, VMs, & Infrastructure Part 2

Yesterday we shipped part 1 of our Container Summit notes. Today is a continuation! We’ll share a few of the other talks we enjoyed.

In this post you’ll find stories from Wall Street veterans, open source giants, and nimble challengers.

Wolf of What? Containers on Wall St.

Jake Loveless, Lucera

Jake is the CEO of Lucera, a company specializing in IaaS for Wall Street. Lucera originally began as a high-frequency trading firm, a space where, as Loveless noted, speed really matters, and where they optimized relentlessly. Lucera quickly realized there was a good buck to be made in the infrastructure world, so they switched over.

Jake likes to think about IaaS as building racetracks. In keeping with the general theme of the summit, Lucera does so 100% on containers. To make the dream a reality, they utilize SmartOS and other parts of Joyent’s stack.

In the search for speed, Lucera tried a lot of different strategies: better colocation spots, virtual machines, and so on. Compromises on price and performance popped up everywhere. Or at least they did, until Jake tried close-to-the-metal containers.

At this point, Loveless projected an image of an iron triangle on screen. At each point of the triangle were reliability, performance, and security. Jake warned that when one ventures too far towards any point, the other two tend to suffer.

So where were containers on the triangle? It turns out, they’ve been something of a holy grail for Lucera.

Reliability

“If you can’t debug your system while it’s running, you will never write reliable software.”

Jake performed a live demo of DTrace to show just how nice debugging is on Lucera’s stack. He added, “You just can’t simulate some of the events that happen in production.”

For context, Loveless noted that at peak his servers push somewhere in the tens of millions of messages per second.

Security

Lucera is EAL4 certified. That’s the highest Common Criteria assurance level mutually recognized under the CCRA for commercial software.

Performance

Performance turned out to be one of the keys to Lucera’s success. To showcase this, Jake shared a story about the Swiss National Bank (SNB). This was probably my favorite war story from the summit.

Case Study

Jake started with a little backstory. Crucial to this story is the Law of One Price. Roughly stated, it says identical assets must trade at the same price everywhere, leaving no opportunity for arbitrage. In the case of international currency, a dollar converted into euros, then into pounds, then back into dollars must return the same value.

If one currency price changes, then all prices must change in order to remove arbitrage opportunities. If this fails to happen, then the markets understandably get screwy.
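The no-arbitrage condition is easy to sketch numerically. Here’s a minimal Python example using made-up exchange rates (every number below is illustrative, not market data):

```python
# Hypothetical exchange rates, for illustration only (not real market data).
usd_to_eur = 0.90   # 1 USD buys 0.90 EUR
eur_to_gbp = 0.80   # 1 EUR buys 0.80 GBP

# Under the Law of One Price, the third leg must be consistent with the
# other two: converting back to dollars returns what we started with.
gbp_to_usd = 1 / (usd_to_eur * eur_to_gbp)

def round_trip(dollars):
    """Convert USD -> EUR -> GBP -> USD."""
    return dollars * usd_to_eur * eur_to_gbp * gbp_to_usd

print(round_trip(1_000_000))  # ~1,000,000: no free money

# If one leg drifts out of line (say GBP/USD is quoted 1% too high),
# the round trip no longer nets out and an arbitrage appears.
stale_gbp_to_usd = gbp_to_usd * 1.01
profit = 1_000_000 * usd_to_eur * eur_to_gbp * stale_gbp_to_usd - 1_000_000
print(f"arbitrage profit: ${profit:,.2f}")  # roughly $10,000 per $1M traded
```

This is the imbalance the SNB move created, at enormous scale: one leg of the triangle repriced instantly, and every cross rate had to catch up.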

Jake shared that investors have also traditionally considered the Swiss franc as a safe haven asset. In other words, buy them and you know your money is safe.

Context established, Jake delved deeper into the tale. In 2011, amid the financial crisis, money poured into the Swiss franc. This caused the relative value of the franc to rise dramatically. In response, the SNB introduced an exchange rate peg, fixing a floor for the euro at 1.2 Swiss francs. This meant the central bank would jump in and buy foreign currency whenever the euro dipped below 1.2 francs.

Thanks to this policy, by 2014 the SNB had amassed about $480 billion worth of foreign currency, a sum equal to about 70% of Swiss GDP. On January 15th, 2015, at 4:30 AM EST, the SNB decided enough was enough. Without notice, they removed the currency peg. This was intended to help deflate the debt balloon.

In Jake’s words, “This was like 4 years of pressure instantly being relieved.” For those watching closely, the opportunity was clear. A big arbitrage opportunity appeared. To paint a clear picture of magnitude, Loveless noted on that day a handful of banks had their worst electronic trading days EVER.

As for Jake’s personal experience: at 4:35 AM he received a call from his usually reserved CTO. “You’ve got to see this.”

Jake logged on and saw no errors and no alerts. Had the system gone down? As he continued sifting through dashboards, he spotted 200+ instances of CPU bursting. Bursting is a SmartOS feature that lets a maxed-out zone borrow a neighbor’s spare CPU for 60 seconds. There was definitely smoke, but no signs of fire.

Jake kept a close eye on things as the day went on. Elsewhere, a panic was stirring. Less robust Wall Street infrastructures started to fall. This placed increased load on the remaining shops, which in turn caused many of them to fall. The crowding effect was the death knell for a lot of businesses that day.

For Lucera? By the end of the day, they saw not a single broken trade. Their infrastructure stood up to the chaos, and as one of the few shops left standing, Lucera reaped great rewards. Jake noted many of their customers saw a month’s worth of trading value in a single day.

Type-C Hypervisors

Dustin Kirkland, Canonical

What is a Type-C hypervisor? Dustin Kirkland took us back to 1973 to explain, specifically to the fourth ACM Symposium on Operating Systems Principles, and more specifically to a paper that came out of it.

In the “Formal Requirements for Virtualizable Third Generation Architectures” paper, the authors set forth the formal definitions for what it takes to create a hypervisor and virtual machines.

Since then the world has known two types of hypervisors. Type 1 sits natively on the bare metal.
Type 2 runs on a machine with an actual operating system, where the hypervisor acts more like an app.

Type 1 examples: VMware’s vSphere and Xen.
Type 2 examples: KVM and VirtualBox.

LXD is the introduction of the Type C hypervisor. In Type C, each container shares the same OS as the host.

Back to the 1973 paper. There are a few simple requirements for a VM:

  • Efficiency
  • Resource Control
  • Equivalence (anything you can do in the guest, you should be able to do in the host and vice versa)

There’s also a nice-to-have fourth requirement: recursively runnable virtual machines. In other words, if all this virtualization pans out, there’s no reason you shouldn’t be able to run a VM inside a VM.

Dustin then asked an important question, “How is this different from Docker?” Docker is an application container, which solves an important problem, “How do I put one binary, and execute it in a container? How do I package an app like a static binary, and move it around like a static binary?”

LXD, in contrast, is an extension of Linux containers (LXC). LXD is a machine container, and machine containers act very much like virtual machines. The advantage is that they pack more densely on hardware and boot up faster, all while still fundamentally operating like a full-fledged OS.

At the end of the slides, Dustin jumped into a live demonstration of LXD (and its ability to meet the criteria outlined in 1973). In all of the benchmarks, Dustin’s machine showed near-total parity between LXD and natively running scripts.

If you’re interested in learning more about LXD, visit linuxcontainers.org.

Container Ecosystem Standards: Needs & Progress

Brandon Philips, CoreOS

Whereas most other speakers at Container Summit hopped a few decades back for history, CoreOS’s Brandon Philips chose a more recent span: the landscape of containers over the past year or so.

To set the stage, Philips noted that specs and requirements are largely still an open discussion in the Docker world. What makes a good requirement? This is a question Brandon asked himself when he set out to define a standard.

The rhetorical answer is that it needs to start with the coder. Ease of use for the folks actually creating things is a big deal.

What makes Docker useful? Philips steered to another rhetorical prompt: “We’ve had LXC and tarballs for years, but we didn’t have conferences about it.”

Why are we talking about containers now?

First, there’s a cryptographic digest for identifying containers. Is the container online the same as the one I have locally? And if Alice built it, I should be able to validate, cryptographically, that Alice built the container.

Second, a crypto identity is not enough, because humans suck at remembering things. We need human-readable names. Brandon noted that despite his personal dislike for them, DNS-style names work pretty well, e.g. “com.example.app”.
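These first two ideas, a content digest plus a human-readable name, can be sketched in a few lines of Python. This is only an illustration of the concept; the registry and names here are hypothetical, not the actual appc or Docker formats, and proving *who* built an image additionally requires public-key signatures, not just a hash:

```python
import hashlib

def image_digest(image_bytes: bytes) -> str:
    """Content-addressed identity: the ID is a hash of the bytes themselves."""
    return "sha256-" + hashlib.sha256(image_bytes).hexdigest()

# A hypothetical registry mapping DNS-style names to digests.
registry = {}

def publish(name: str, image_bytes: bytes) -> str:
    digest = image_digest(image_bytes)
    registry[name] = digest
    return digest

def verify(name: str, fetched_bytes: bytes) -> bool:
    """Is the container I fetched byte-for-byte the one that was published?"""
    return registry[name] == image_digest(fetched_bytes)

alice_image = b"...rootfs tarball bytes..."
publish("com.example.app", alice_image)

assert verify("com.example.app", alice_image)             # untouched: passes
assert not verify("com.example.app", alice_image + b"!")  # tampered: caught
```

The digest gives machines an unforgeable identity; the DNS-style name gives humans something they can actually remember and type.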

Third, containers must be standardized. Whether it’s rkt or Docker, you should be able to run a container anywhere. He borrowed the analogy of a real-world shipping container: the container that fits on a ship is just as likely to fit on a train, a trailer, and so on.

The appc spec

With the question of requirements answered, Brandon journeyed back to the introduction of the appc spec last year. The release was paired with a bit of code as a demo of an actual implementation, which emerged as rkt.

He noted that when CoreOS released this, he was hoping to see a lot of runtimes based off of the spec. The interesting thing with a spec is it enables people to build systems around it. There’s no reason for one implementation to be the world’s gateway to containers.

A spec also enables discoverability. How can I find this thing using a human readable name? Brandon sagely noted that conventions like this are important for helping folks along.

When appc was released, the great hope was the community would coalesce around the spec. Rejoice! Instead, appc caused quite a stir. That’s old news though. Since appc, CoreOS, Docker, and many others have come together to form the Open Container Initiative.

Its latest happenings are viewable at github.com/opencontainers/spec.

Brandon closed by encouraging the crowd to read through the spec. To ensure he and the rest of the Open Container Initiative make the right choices, feedback from us (the coders) is essential.

Containing the Summit

If you’re a regular on HN or r/programming, then containers might seem like old news. The Container Summit was a good reminder that HN and proggit are but a small slice of a much larger community. To most, containers are likely still just a distant whisper.

Regardless of whether you buy into the hype, what’s clear is they are making an indelible impact on how we reason about computing. And, that shift is pretty cool.